From YouTube: CephFS Code Walkthrough: CephFS Mirroring Part 1
Description
Presented by: Venky Shankar
Part 2: https://www.youtube.com/watch?v=oMs7appb20s
Schedule: https://tracker.ceph.com/projects/ceph/wiki/CephFS_Code_Walkthroughs
A warm welcome to this code walkthrough talk. This is about the feature newly introduced in Pacific, which is CephFS mirroring, or rather CephFS snapshot mirroring. The idea is to enable the cluster admin or the operator to selectively add directory paths for snapshot synchronization to a remote Ceph file system. The idea is to do it asynchronously, which means we have a bunch of daemons involved that do the job of copying data from one cluster to another.

You can also synchronize data from one file system to another file system in the same cluster; that is pretty much transparent, and the idea remains the same.

I plan to split this talk into roughly four sections. The code walkthrough starts with the structures involved: we have different things we need to persist in the cluster, in RADOS objects, so the first part is essentially what structures are involved, and what gets persisted where. The other part is the interface: from the operator's or the cluster admin's point of view, how to actually enable a directory in a CephFS file system for mirroring. Then we go into the details of how it is handled. There are a couple of components: the manager module that provides the interface, and the actual daemon that does the copying of data and the snapshotting. So let's start with the first section, which is the structures.
You can think of this as attaching a remote file system, which can be in a separate cluster over a WAN, to a file system in another cluster. It is like establishing a relationship between a primary file system in the primary cluster and a file system in the secondary, or remote, cluster. For that we have a concept called a peer.

Obviously, when the operator or the cluster admin adds a peer, all this data needs to be saved somewhere: the peer information has to be persisted, and it is persisted in the FSMap. You can see we have a class Filesystem here; the FSMap is just the file system map used by CephFS. You can subscribe to the FSMap, just like you can subscribe to the OSDMap or the MgrMap and so on, so any update to the FSMap done by anyone in the cluster means you get notified with that particular change.

For that particular file system we have MirrorInfo. MirrorInfo is nothing but some basic information, such as whether mirroring is enabled or not, plus a bunch of peers. Peers is nothing but a set of Peer, where a peer is an endpoint.

A file system can have multiple peers, meaning you can attach more than one peer to a file system, and the mirror daemon takes care of replicating the snapshots to each of the peers. A peer is identified by a UUID, which is a unique identifier for that particular peer, plus cluster info. The cluster info contains just enough information to actually reach out to the peer and start talking to it: a cluster name, a file system name, and a client name.
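As a rough illustration, the shapes just described can be pictured with the following simplified Python sketch. This is not the actual C++ FSMap code; the class and field names here are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PeerClusterInfo:
    # Just enough information to reach the remote peer: which client
    # to authenticate as, which cluster, and which file system.
    client_name: str   # e.g. "client.mirror_remote"
    cluster_name: str  # e.g. "remote_site"
    fs_name: str       # e.g. "backup_fs"

@dataclass(frozen=True)
class Peer:
    uuid: str               # unique identifier for this peer
    remote: PeerClusterInfo

@dataclass
class MirrorInfo:
    mirrored: bool = False                    # is mirroring enabled?
    peers: set = field(default_factory=set)   # set of Peer

# A file system with mirroring enabled and one attached peer:
info = MirrorInfo()
info.mirrored = True
info.peers.add(Peer("uuid-1", PeerClusterInfo(
    "client.mirror_remote", "remote_site", "backup_fs")))
```

Dumping the file system would then show the mirrored flag and the peer set, matching what the encode/decode routines persist.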
A
So
by
looking
at
that
this,
you
might
think
that
you
know
the
cluster
name
is
a
remote
filesys,
remote
cluster
name
and
which
is
true
because
with
this
interface
right
now,
what
we
require
is
that
the
remote
clusters,
self-configuration
files,
should
be
present
in
the
source
clusters
or
the
primary
clusters
host.
A
So
the
that
would
involve
the
manager
module,
the
host
running
the
manager
module
and
the
host
running
the
mirror
demons.
That
is
one
part,
but
we
can
avoid
this
by
something
called
as
bootstrapping
up
here
we
come
to
that
later,
but
we
even
to
bootstrap
appear.
A
We
still
need
some
information
regarding
that
peer,
just
basically
the
uuid,
so
we'll
come
to
that
later
regarding
bootstrap,
but
for
now
you
know,
we
can
think
that
you
know
the
remote
clusters
configuration
file
is
present
in
the
primary
cluster,
so
this
is
and
we
have
the
usual
encode
decode
routines
in
the
source
file.
You
have
the
cluster
info
and
the
pier
and
the
mirror
info,
and
once
you
dump
the
file
system,
you
get,
you
know
the
piers
if
mirroring
is
enabled-
and
you
have
peers
so.
A
So
the
reason
for
storing
this
nfs
map
is
the
mirror
demons.
You
know
to
to
start
replicating
the
snapshot
directory
snapshots
to
the
peers.
They
need
to
know
about
what
peers
have
been
added
or
removed,
or
the
pr
updates.
So
the
mirror
demon
kind
of
you
know
starts
subscribes
to
the
fs
map
and
you
know
once
it
gets
nfs
map,
it
goes
to
the
pl
list
and
it
and
on
pr
update
it's
just
you
know
it's
just
the
dips.
It
needs
to
know.
A
A
You
know
data
structures
and
we'll
we'll
cover
this,
while
going
through
the
pi
bind
mirroring
code,
which
is
the
manager
module,
because
we
store
a
bunch
of
things
in
in
products
directly
in
certain
objects,
so
that
will
be
covered
later,
but
this
is
the
essence
of
it.
So
you
know
this
is
the
main
thing
where
you
know
we
store
the
peer
information
fs
map,
so
that
anyone
who
subscribed
to
the
fsmap
can
get
a
list
of
of
peers.
Okay, now I will come to the interface. For the interface, the best way, I guess, is to just open the docs and explain the interface alongside the code. The interface is provided by the manager module, which is called the mirroring module. The mirroring module is disabled by default, and you need to enable it by running `ceph mgr module enable mirroring`. We will go into detail on the manager module later. Once that is done, you can enable mirroring on a file system.

Where that is done is in the monitors: the mirroring-related monitor commands are prefixed with `fs mirror`. That is the internal interface; the manager interface is actually `fs snapshot mirror enable`, which calls into the monitor interface. So always make sure, when you are using the mirroring module, to use the `fs snapshot mirror` prefix rather than `fs mirror`. It can be a bit confusing, but yeah. Here we define the actual interfaces, which map to things like enabling a particular file system for mirroring, disabling it, and so on.
Enabling is pretty simple, because all you need to do is record it in the FSMap. So this is the handler for when you are adding a peer: you extract whatever is needed, such as the UUID and the remote cluster spec. The spec is nothing but a tuple of the client name, which is used to access the remote file system, and the cluster name; that is called the spec. You extract the spec, and then you operate on the FSMap via the modify-fsmap call and just add the peer to it. Once that is done, the peer information gets recorded in the FSMap, and anybody who has subscribed to the FSMap gets notified that there is an FSMap update, which in this case is the mirror daemon. We will come to how it does that, but once it gets an update it figures out which peer updates happened, and so on.

Likewise, you can disable mirroring on a file system; it is pretty simple and does the same thing in reverse. For remove peer, you operate on the FSMap and call peer_remove. The peer_add and peer_remove interfaces in the FSMap are pretty simple: it just maintains a list of peers, so you just add to it or delete from it.

That is the peer interface, and it is the main thing we have right now. Again, make sure to use the `fs snapshot mirror` interface rather than the `fs mirror` interface. The actual implementation behind the `fs snapshot mirror` commands we will explain in detail when we do the mirroring module code walkthrough.

The mirroring module also provides interfaces to add and remove directories for mirroring. You can add a directory for mirroring using `fs snapshot mirror add`, and remove it likewise. One thing to note is that only absolute directory paths are allowed, and paths are normalized internally in the manager module: a path with redundant components, say `/a//b` or `/a/./b`, is equivalent to `/a/b`.
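To illustrate the normalization, here is a small sketch using Python's standard `os.path`. This assumes behavior analogous to what is described above; the module's actual helper may differ in details.

```python
import os

def normalize_dir_path(path: str) -> str:
    # Only absolute paths are accepted; redundant separators and
    # "." components are collapsed, mirroring the normalization the
    # manager module applies before recording a directory.
    if not os.path.isabs(path):
        raise ValueError(f"{path!r} is not an absolute path")
    return os.path.normpath(path)

# All of these refer to the same directory after normalization:
assert normalize_dir_path("/a/b") == "/a/b"
assert normalize_dir_path("/a//b/") == "/a/b"
assert normalize_dir_path("/a/./b") == "/a/b"
```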
The important thing is that once a directory is added for mirroring, its subdirectories and its ancestors up to the root are disallowed from being added for mirroring. That is because of the way we synchronize snapshots: the mirror daemon actually does a full copy of the snapshot data from the primary to the remote file system, and then takes a snapshot in the remote file system with the same name as the snapshot being transferred. Due to that, it is a bit tricky to allow subdirectories and ancestors for mirroring once a directory has been configured, so it is disallowed. If you have added, say, /d0/d1/d2, the mirroring module does not allow you to add /d0/d1, nor anything below /d0/d1/d2, such as /d0/d1/d2/d3 or anything below that. Which means that if you schedule the root of a file system for mirroring, you essentially cannot add any other directory in that file system for mirroring, because everything is below the root. All right, so that is the interface part.
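The restriction above can be sketched as a simple ancestor/descendant check. This is illustrative Python, not the module's actual code; the real module validates candidates against its persisted directory set.

```python
def is_ancestor(a: str, b: str) -> bool:
    """True if directory a is a strict ancestor of directory b."""
    return b.startswith(a.rstrip("/") + "/")

def conflicts(existing: set, candidate: str) -> bool:
    """True if candidate equals, is an ancestor of, or is a
    descendant of any directory already added for mirroring."""
    return any(d == candidate
               or is_ancestor(d, candidate)
               or is_ancestor(candidate, d)
               for d in existing)

dirs = {"/d0/d1/d2"}
assert conflicts(dirs, "/d0/d1")        # ancestor: rejected
assert conflicts(dirs, "/d0/d1/d2/d3")  # descendant: rejected
assert not conflicts(dirs, "/d0/x")     # sibling subtree: allowed
assert conflicts({"/"}, "/anything")    # root covers everything
```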
Let's do the actual fun part, which is the manager module. The manager module lives in pybind/mgr/mirroring. The module source itself is pretty much uninteresting, because all it does is set up the commands: you can see we have enable, disable, peer add, and so on, and these just dispatch to the actual machinery, which is present in pybind/mgr/mirroring/fs/snapshot_mirror.py. So the module source just hands over, just dispatches, each command to the actual implementation in the snapshot mirror source.

So you have enable, disable, peer add, peer remove, and peer list, with which you can list the peers for a file system; we will come to the bootstrap interface later. The dir map and mirror daemon status interfaces are not really intuitive right now, and they are not of much use, because, at least in Pacific, we only support running a single mirror daemon. But the machinery is already there to support running multiple mirror daemons in active-active mode, which means the directory load is spread across a bunch of mirror daemons for concurrent synchronization. For Pacific, right now, we only support running a single mirror daemon; we plan to enable HA, active-active, pretty soon.

Okay, before we go into each of these interfaces, let's see how the flow works in the actual snapshot mirror part.
If you look at the snapshot mirror source, there are a couple of interesting things here. One is that the manager module needs to know which mirror daemons are running, how many are running, and how to actually talk to a particular mirror daemon. The reason is that the interface for adding and removing directories for mirroring goes through the manager module, and the manager module needs to hand those directories over to the mirror daemons so that the mirror daemons can start synchronizing snapshots. So the manager module needs to know which mirror daemons are running and how to talk to them.

There are a couple of structures involved here. The manager module and the mirror daemons cooperate by talking to each other through RADOS objects via watch-notify. What happens is that you have a special index object which lives in the metadata pool of the file system; this is all in the primary, or source, file system. We have this `cephfs_mirror` object, which is created when you enable mirroring for a file system, and it serves two purposes. One purpose is that the omap values in this particular object record which directories have been added for mirroring. The other purpose of this index object is to talk to the mirror daemons: it is the basis of a keep-alive handshake between the manager module and the mirror daemons.

If you are aware of RADOS watch-notify, what can be done with it is this: a client can establish a watch on a particular RADOS object, multiple clients can establish watches on the same object, and some other client can send a notify on that object. All the watchers watching that object then get notified: they get pinged with a message that can be handled, so it is kind of a broadcast. In this case you have the index object, `cephfs_mirror`, and the mirror daemons, when they spawn, establish a watch on it; the manager module periodically sends a notify on that object, which means all the mirror daemons get the message and they ack back. The payload of the ack carries certain information about the mirror daemon, such as its address, and the instance id, which is used internally as an id for a daemon; we will see how that is used. But the essence is that, using this watch-notify broadcast, the manager module knows how many mirror daemon instances are running.
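The keep-alive pattern can be modeled without librados at all. The following is a toy simulation of the broadcast/ack exchange; the real code uses RADOS watch/notify on the `cephfs_mirror` object, and the ids and addresses below are made up.

```python
class IndexObject:
    """Stands in for the cephfs_mirror RADOS index object."""
    def __init__(self):
        self.watchers = {}   # instance_id -> ack callback

    def watch(self, instance_id, callback):
        # A mirror daemon establishes a watch when it spawns.
        self.watchers[instance_id] = callback

    def notify(self, payload):
        # The manager module broadcasts; every watcher replies with
        # an ack carrying its instance id and address.
        return [cb(payload) for cb in self.watchers.values()]

obj = IndexObject()
# Two mirror daemons register watches:
obj.watch(4123, lambda p: {"instance_id": 4123, "addr": "10.0.0.1"})
obj.watch(4124, lambda p: {"instance_id": 4124, "addr": "10.0.0.2"})

# A periodic notify from the manager module discovers both daemons:
acks = obj.notify(b"ping")
assert {a["instance_id"] for a in acks} == {4123, 4124}
```

The real ack payload is what lets the manager module map each daemon to its per-daemon index object later on.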
That is done in the instance watcher: we have this InstanceWatcher class, which bootstraps by doing a notify, so it is just a loop. We try to do everything asynchronously if possible; if not, we spawn threads. In this case, we added an aio_notify call to the rados Python binding. Once you do this aio_notify on the mirror object prefix, where the object prefix is nothing but `cephfs_mirror`, we have a well-known object that both the mirror daemons and the manager module know about: the mirror daemons establish a watch on it and get notified when we do a notify. Once that is done, if there are five mirror daemons running, all five of them reply back to this aio_notify. The last parameter is handle_notify, which is the callback that gets called when we get the acks back from the watchers. What we get is a list of acknowledgements and a list of timeouts.

It is possible that some of the mirror daemons, for some reason, did not reply: there was a network hiccup, or the daemon went away, or whatever. For the daemons that did reply, their information, which is the daemon id (the instance id) and the payload, ends up in the ack list, and the timeout list records which of the instances timed out.

So what we do is go through it and figure things out. We have a notion of an instance timeout: we keep doing this aio_notify broadcast, but if we have seen a mirror daemon earlier and it has not responded for the instance-timeout interval, which is 30 seconds, we consider that mirror daemon to be dead, and we go and blocklist it. It is possible that some directories had been assigned to that daemon, so we go and shuffle those directories across the other daemons; we will come to that in the HA part.

This is the way we actually discover the running daemons. Where all this gets tied up is in the main snapshot mirror source, where we have this instance watcher: it is set up, it creates the object, and it starts in the background, continuously doing the aio_notify every second. We provide a listener, so it calls back saying these are the instances that have been added and these are the instances that have been removed, and once we get the added and removed instances, we can construct a map of how many mirror daemons are running and which directory should be assigned to which mirror daemon.
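The added/removed bookkeeping from successive notify rounds can be sketched like this. It is illustrative only: the real InstanceWatcher also blocklists dead daemons via RADOS, and only the 30-second timeout figure comes from the talk.

```python
import time

INSTANCE_TIMEOUT = 30  # seconds without an ack before a daemon is "dead"

class InstanceTracker:
    def __init__(self, now=time.monotonic):
        self.now = now
        self.last_seen = {}  # instance_id -> timestamp of last ack

    def on_acks(self, acked_ids):
        """Called after each notify round with the ids that acked.
        Returns (added, removed) instance ids."""
        t = self.now()
        added = [i for i in acked_ids if i not in self.last_seen]
        for i in acked_ids:
            self.last_seen[i] = t
        removed = [i for i, seen in self.last_seen.items()
                   if t - seen >= INSTANCE_TIMEOUT]
        for i in removed:
            del self.last_seen[i]  # the real code would also blocklist
        return added, removed

clock = [0.0]
tr = InstanceTracker(now=lambda: clock[0])
added, removed = tr.on_acks([1, 2])
assert added == [1, 2] and removed == []
clock[0] = 31.0
added, removed = tr.on_acks([1])   # daemon 2 went silent for 31s
assert added == [] and removed == [2]
```

Directories that were assigned to a removed instance would then be reshuffled to the remaining ones.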
We will see how this ties into the internal machinery, but before doing that I will also talk about how directories are mapped to mirror daemons, and the way that is done using a state machine. When you want to add a directory for mirroring, we do some basic checks, like whether the file system is mirrored and things like that, and once those basic checks are done we end up here. This purging thing is something internal for when you are removing directories, but the main thing to see is that we have something called a policy. The policy defines how directories get mapped to mirror instances, that is, mirror daemons.

Right now we support a simple M-by-N policy: if you have M directories and N mirror daemons, you do a simple M-by-N split. Say you have a hundred directories and five mirror daemons: each of them gets 20 directories to handle. But to actually map a particular directory to a particular mirror daemon, the whole flow is tied into a state machine.
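A minimal version of such an M-by-N assignment might look like the following. This is a sketch of the idea, not the module's actual policy code; it always hands the next directory to the least-loaded instance.

```python
def assign_directories(directories, instances):
    """Spread M directories across N mirror daemon instances by
    always picking the instance with the fewest directories."""
    load = {inst: [] for inst in instances}
    for d in sorted(directories):
        target = min(load, key=lambda inst: len(load[inst]))
        load[target].append(d)
    return load

# 100 directories over 5 daemons -> 20 each, as in the talk's example.
dirs = [f"/vol/d{i}" for i in range(100)]
result = assign_directories(dirs, ["daemon-a", "daemon-b", "daemon-c",
                                   "daemon-d", "daemon-e"])
assert all(len(v) == 20 for v in result.values())
```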
We will see that, but before that let's go into the policy. We have this dir map, which tells you the mapped directories, and we have a policy there with add_directory. How all this ties up, as I said, is through a state machine; we have a state transition table here. Not sure if this is visible — okay, fine; you can always open the source and look at it yourself, and I will just go through it. We have this state machine that governs what action should be taken on a directory.

Let's take the simplistic case of adding a fresh directory: the manager module has not seen it earlier, which means it has not mapped it earlier. Once you add such a directory, you follow the transitions starting from the state ASSOCIATING. This is the part of the state machine that governs how a directory is assigned to a particular mirror daemon. If you look at the state transition table, it is nothing but a map between something called a transition key and the actual transition that needs to be done. A transition key is nothing but the state plus the action type, which says what we need to do: do we need to update the map, do we need to remove it from the map, or acquire and release — I will come to acquire and release later; that is tied in with the aio_notify mechanism.

So we have states, actions, and policy actions. Before going into the state machine details, let me quickly go through these. We have the policy action, which can either map, unmap, or remove a directory. The policy action type MAP means assigning a particular directory to a particular mirror daemon instance. UNMAP is the reverse: you need to disassociate it, removing it from the map. And REMOVE means you actually need to go and remove the directory's record. As I said, this `cephfs_mirror` object has omap key-values containing the list of directories: the key is the directory name prefixed by an identifier, and the value is a blob recording which daemon instance this particular directory is assigned to.
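The on-disk record can be pictured with this simplified sketch. The real value is a binary-encoded blob and the real key prefix is internal to the module, so both the prefix and the JSON encoding below are stand-ins for illustration only.

```python
import json

DIR_KEY_PREFIX = "dir_map."   # illustrative prefix, not the real one

def omap_entry(dir_path: str, instance_id: int):
    """Build an omap key/value pair recording that dir_path is
    assigned to the mirror daemon with this RADOS instance id."""
    key = DIR_KEY_PREFIX + dir_path
    value = json.dumps({"instance_id": instance_id}).encode()
    return key, value

key, value = omap_entry("/d0/d1/d2", 4123)
assert key == "dir_map./d0/d1/d2"
assert json.loads(value)["instance_id"] == 4123
```

Writing such an entry corresponds to the MAP_UPDATE action discussed next; deleting it corresponds to MAP_REMOVE.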
We will come back to that. So that is the policy action; then we have the action type. Once we go into the state machine, you will see that once you have mapped a particular directory to a mirror instance, you need to save that mapping on disk: that is the MAP_UPDATE part. MAP_REMOVE is the reverse: when you are disassociating a particular directory from a mirror instance, you need to remove that mapping from the omap key-values.

Then there are ACQUIRE and RELEASE. These are messages that are sent to the mirror daemon, and this uses aio_notify again. The acquire and release messages do not go through the `cephfs_mirror` object because, as I said, all the mirror daemons are watching that particular object: if you want to talk to one particular mirror daemon, notifying through the `cephfs_mirror` object does not work, because that broadcasts to all the daemons. So we have a per-daemon index object, which is `cephfs_mirror.<instance-id>`. Only one daemon is listening on that particular object, because it is private to that daemon.

The instance id is the RADOS instance id used by the mirror daemon. And how does the manager module know about the instance id? If you look at the InstanceWatcher class, which I just went through, the payload sent by the mirror daemons in response to the aio_notify contains the id each daemon is using. It is the same id that is used as the suffix in the `cephfs_mirror.<instance-id>` private index object for that particular mirror daemon.
So acquire and release are not broadcast; each is addressed to one particular daemon. Acquire is a message, sent by the manager module to the mirror daemon, that assigns a directory to it.

That is the action type; then we have the state. When you are adding or removing a directory, it goes through a bunch of states. It starts with the ASSOCIATING state, which applies when you freshly add a directory: the manager module is figuring out that this is a fresh directory that needs to be associated with a particular mirror instance. As you will see in the state machine, when you are adding a directory you first need to figure out which mirror daemon the directory should be assigned to, based on the M-by-N policy. Once that is done, you do a MAP_UPDATE, which stores the key-value pair on disk saying that this particular directory is assigned to this particular daemon. Then you send an ACQUIRE message to that daemon, and once that is done you move the state to ASSOCIATED.

We also have an INITIALIZING state. The only difference between INITIALIZING and ASSOCIATING is this: we do not store these states on disk, so once the state has moved to ASSOCIATED, if there is a manager restart, what happens is that during startup the mirroring module reads the omap key-value pairs, and each of the directories in that list starts in the INITIALIZING state. INITIALIZING means the module has seen that particular directory earlier, it has it in its record, and it just needs to verify with the mirror daemon that the assignment is still valid. It is the same ACQUIRE message that goes to the mirror daemon, but if you look at the state machine, it is kind of a short circuit: it does not go through all the states. From INITIALIZING, once we get an ack back from the mirror daemon, we move it straight to ASSOCIATED.

Then there is SHUFFLING. Shuffling is the interesting part, which right now we do not exercise, because we only support running a single mirror daemon. But the idea is that once you have mirror daemons coming and going, spawning and getting restarted, the SHUFFLING state in the state machine amounts to telling one mirror daemon to release a directory and then sending an ACQUIRE message to another daemon to acquire that directory. We will come to that.

So let's go to the state machine, starting with the simplistic case of associating a particular directory.
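The transition-table idea, where (state, action type) keys look up the next step, can be sketched as follows for the fresh-add path just described. The names are illustrative, and the real table also covers the initializing, shuffling, and removal paths.

```python
from enum import Enum, auto

class State(Enum):
    ASSOCIATING = auto()
    ASSOCIATED = auto()

class ActionType(Enum):
    NONE = auto()
    MAP_UPDATE = auto()
    ACQUIRE = auto()

# transition key: (current state, last completed action type)
# transition value: (next action to perform, next state)
TRANSITIONS = {
    (State.ASSOCIATING, ActionType.NONE):
        (ActionType.MAP_UPDATE, State.ASSOCIATING),
    (State.ASSOCIATING, ActionType.MAP_UPDATE):
        (ActionType.ACQUIRE, State.ASSOCIATING),
    (State.ASSOCIATING, ActionType.ACQUIRE):
        (ActionType.NONE, State.ASSOCIATED),
}

def step(state, action):
    return TRANSITIONS[(state, action)]

# Adding a fresh directory: persist the mapping, then acquire,
# then land in ASSOCIATED.
state, action = State.ASSOCIATING, ActionType.NONE
performed = []
while True:
    next_action, state = step(state, action)
    if next_action is ActionType.NONE:
        break
    performed.append(next_action)
    action = next_action

assert performed == [ActionType.MAP_UPDATE, ActionType.ACQUIRE]
assert state is State.ASSOCIATED
```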
The actual policy code here takes the state machine and drives it from one state to another for an event. We start with the state ASSOCIATING, and the current action type is none, since this is just the start, so what we do is a MAP_UPDATE. Coming to the policy: if you look, there are a bunch of helper routines here that drive the state machine. It calls into the state machine saying: here is the current state and here is the action type; what is the next transition I should perform? That is all tied up here, but the essence is that since we are adding a directory, we need to map it to a particular instance, so we do this map call, which implements the M-by-N policy I was talking about. (Dead instances we will come to later.) For the simplistic case, we iterate over the map.

We maintain a map from instance to directories, which holds the instance id and the directories assigned to it; the instance id is the cephfs-mirror daemon's RADOS instance id. We iterate over that, and once we find the candidate with the lowest load, per the M-by-N policy, we assign the directory to it: we create a state entry that has just enough information, including the instance id it is mapped to, which is kind of a back pointer. The state STALLED is an internal state for when no mirror daemons are available, so there is nothing to map to; in that case we say the directory is stalled.

So the map function here maps a particular directory to an instance. The logic is in the policy source, while the state machine is driven in the snapshot mirror source, and it is an asynchronous state machine, meaning it is timer based.
The way it works is that we keep a list of directory paths for which some action should be taken, depending on each one's current state. The other parts of the code do very little: if you look at add_directory, it just does the minimum and appends that particular directory to the pending list, and the asynchronous, timer-based state machine, every second, iterates through what is in the list and, for each directory, calls into the policy. The policy knows what state the directory is in and what action needs to be taken, and once it knows, it tells the caller: this is the next action to be taken.

We saw that in the state transition part: we have MAP_UPDATE, MAP_REMOVE, ACQUIRE, and RELEASE. For a particular directory, if it is a fresh add, a first-time add, it does a MAP_UPDATE, which saves that directory's key-value omap entry in the `cephfs_mirror` object; I will go through that in a minute. But the idea is that in one pass there can be some directories that need map updates, some that need map removals, some for which you need to send an acquire, and others a release. For the map updates and map removals we call this update-mapping routine, which is asynchronous: it uses the async RADOS API to update and remove the omap values in that particular object, and once that is done it invokes a callback; we will see what happens there.

For the other directories, which need acquire and release, we know which cephfs-mirror instance id the acquire and release messages need to be sent to, because the policy already maintains which directory is assigned to which instance. We just look it up, which ends up in the lookup info, and using that instance id we can send the acquire or release message.

How that is done: if you recall, there is this private index object suffixed by the instance id. You notify that object for this particular directory with a message, and the message can be of acquire type or release type; the mode tells the mirror daemon whether this directory needs to be added to its queue or, if it is already there, whether to back off and not bother about it, because maybe it is getting reassigned to some other mirror daemon. So we have these acquire and release messages sent to a particular mirror daemon instance, and that is done here: you extract the instance id and you create the object name, the instance object, which is nothing but the mirror object prefix, `cephfs_mirror`, suffixed by the instance id. The message sent to this particular object using aio_notify goes to only one particular mirror daemon instance, and it acks back; once it acks back, you have this handler.
A
So
if
you
see
what
happened
is
when
you
added
directory,
depending
on
the
state
transition
here,
we
first
did
a
map
update,
which
is
like
mapping.
First,
you
map
it
in
memory,
saying
that
this
directory
is
assigned
to
this
particular
instance
id
daemon
instance
id.
A
So
the
first
step
is
to
map
update,
which
means
record
it
on
disk,
which
happens
here,
update
mapping,
so
we'll
go
to
that,
but
you
can
think
of
this
as
just
doing
a
rados
call
that
sets
the
key
value
pairs
once
that
is
done.
You
know
the
the
state
machine
now
needs
to
be
transitioned
to
the
next
state.
For
that
what
happens?
Is
you
have
this
callback?
Since this is an asynchronous call, once all these updates and removals are done, this callback, continue_action, gets invoked. It has a bunch of updates and removals, so what we do is iterate through the list of updates and collect which directories were scheduled for updates and which directories were scheduled for removals, and then do a schedule_action. schedule_action is pretty simple: all it does is add these directories to self.dir_paths, which, if you recall, drives the asynchronous state machine.
So now the state is map-updated but it's still associating, and the next state is acquire. You see how it's going, right? You first store the instance map on disk, and then here, if you see, it's acquire: the state machine will now send an acquire message to that particular mirror daemon. So let's talk about what happens here: you call this notifier, saying this is the directory path, send this message to that particular instance.
Yeah, so this gets called when we actually receive an ack from the cephfs-mirror daemons. We again drive the state machine with continue_action, which does the same thing — goes into the updates. The update here is just that particular directory path for which we recently sent an acquire message to a particular daemon. Again, that gets added to the list of directory paths and driven again through process_updates.
You were in associating, you got an acquire ack, and you transitioned to the final state, associated, which means this particular directory has now been associated with that particular instance. Not only is it associated in the sense that the mirror daemon knows about it and is handling it — the manager module also knows about it by way of the on-disk omap key-value pair. The key is the directory path with a prefix, and the value is a blob.
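The associating-to-associated flow described above can be sketched as a small table-driven state machine. This is a simplification for illustration, not the real mirroring module; the event names are assumptions:

```python
# Minimal sketch of the per-directory state machine: a map update moves an
# associating directory towards acquire, an acquire ack finalizes it as
# associated, and the reverse path handles disassociation.
from enum import Enum

class State(Enum):
    ASSOCIATING = 1
    ASSOCIATED = 2
    DISASSOCIATING = 3
    UNASSOCIATED = 4

# (current state, event) -> next state; event names are illustrative
TRANSITIONS = {
    (State.ASSOCIATING, 'acquire_ack'): State.ASSOCIATED,
    (State.ASSOCIATED, 'remove'): State.DISASSOCIATING,
    (State.DISASSOCIATING, 'map_remove_done'): State.UNASSOCIATED,
}

def next_state(state, event):
    # the driver looks up the transition and moves the directory along
    return TRANSITIONS[(state, event)]
```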
As I said, the blob contains the instance id — the cephfs-mirror instance id to which this directory has been mapped. You can see here what happens if the manager restarts: this whole directory set is loaded again by reading the omap in this particular object, and each directory then starts in the initializing state, which, as I said, doesn't go through the whole update-map sequence.
It just sends an acquire message, and once it gets an ack for the acquire message, it again moves the directory towards the associated state.
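Reloading the directory set on manager restart amounts to scanning the object's omap for prefixed directory keys. A minimal sketch, with `dir.` as an assumed key prefix (the real module's prefix differs):

```python
# Sketch of rebuilding the in-memory directory map from omap key-value
# pairs after a manager restart; each key is a prefixed directory path,
# each value is the mapping blob.
DIR_KEY_PREFIX = 'dir.'

def load_dir_map(omap_pairs):
    dirs = {}
    for key, value in omap_pairs.items():
        if key.startswith(DIR_KEY_PREFIX):
            # strip the prefix to recover the directory path
            dirs[key[len(DIR_KEY_PREFIX):]] = value
    return dirs
```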
So that's the state machine and its driver. You can follow the other parts, such as disassociating — it's just the reverse; we can quickly go through it. You want to disassociate a directory from a mirror instance. The first thing you have to do is tell the mirror daemon to back off from synchronizing that particular directory's snapshots, so we send a release message.
And then we do an unmap. We have this start_policy_action, and internally the policy action does all this juggling, but once it gets a release ack, it does an unmap. Unmap just removes — disassociates — the directory from the instance in the internal in-memory map it's using. Once that is done, you just do a remove: map_remove.
Again, as I said, it removes the on-disk omap key-value for that particular directory, and once that's done, you're done — kind of unassociated. Now we move this to the unassociated state, which means no one knows about this directory anymore: we have removed it from the map, and the cephfs-mirror daemon has backed off from synchronizing those snapshots. So, shuffling — you can see what it is.
It's pretty simple. We need to tell the mirror daemon to stop replicating snapshots for that directory, which means sending a release message, then mapping the directory to another instance and sending an acquire message — but before that you do the release. So the essence is: you do a release, then a map update, and then an acquire. You can see here, right — we start by doing a release and unmapping it.
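The release → map update → acquire sequence can be sketched as three ordered steps; the three callables stand in for the real asynchronous actions in the module:

```python
# Sketch of the shuffle sequence summarized above: release the directory
# from the old instance, persist the new mapping, then acquire it on the
# new instance. The callables are stand-ins for the real async steps.
def shuffle_directory(dir_path, old_inst, new_inst,
                      send_release, update_map_on_disk, send_acquire):
    send_release(old_inst, dir_path)        # old daemon backs off
    update_map_on_disk(dir_path, new_inst)  # record the new assignment
    send_acquire(new_inst, dir_path)        # new daemon takes over
```

The ordering matters: the old daemon must stop synchronizing before the new one starts.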
So you remove it from memory, but you don't do a map_remove here like we did in the disassociating state. After unmapping it from memory, we do a map: it finds a new mirror daemon instance to map to. Once that's done, we do a map update — earlier the directory was mapped to one instance id, now it's mapped to another — so we store that, updating the on-disk map, and once that's done, we send an acquire to the new instance id.
A minor detail: once a directory is added — in add, there — we could actually just go ahead and schedule that particular directory for state machine updates by updating the list of directories, self.dir_paths. But it could be that you just added it, it's still sitting only in memory, and the manager module goes down — yet we have acked back to the operator, saying that we are now handling this directory. If the manager restarts in between, the directory would no longer be known; nobody knows about it. So once you add a directory, here's what we do.
We store just the minimal information on disk, so that if the manager restarts we can pick up the directory again and resume the state transitions. For that we create this blob — this is the actual blob that gets stored in the omap value. It has a version; the version is unused right now, since this is just the first version. If we add fields to it, we bump up the version and then handle it accordingly.
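A minimal sketch of what this omap value blob might look like, assuming a JSON encoding for readability (the real module uses its own serialization); the field names mirror the ones mentioned in the talk:

```python
# Illustrative versioned mapping record: instance_id stays empty until the
# directory is mapped, last_shuffled feeds the shuffle throttling.
import json

DIR_MAP_VERSION = 1

def encode_dir_mapping(instance_id='', last_shuffled=0.0):
    return json.dumps({'version': DIR_MAP_VERSION,
                       'instance_id': instance_id,
                       'last_shuffled': last_shuffled})

def decode_dir_mapping(blob):
    rec = json.loads(blob)
    # if fields are added later, the version is bumped and handled here
    if rec['version'] > DIR_MAP_VERSION:
        raise ValueError('unknown dir-mapping version')
    return rec
```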
We store an empty instance id, because we have not yet mapped it, and last_shuffled — the last-shuffled timestamp — would be used when we shuffle. You don't want to move directories around too frequently once you see a new mirror daemon. If mirror daemons keep joining again and again — say you start with two mirror daemons,
then three, then four, in rapid intervals — you don't want to move these directories around so soon, because you'd just be wasting time moving directories rather than actually synchronizing snapshots. So we use this last-shuffled time: when we shuffle a directory, or when we assign a particular directory to an instance, we record the time when it was added.
Even if, based on the M-by-N policy or any other policy, you could spread the load across instances, we won't do it, because the directory was just shuffled. It's just a minor detail — this field is unused right now because we don't do that yet — but once we start doing it, the policy needs to take all these additional variables into account to decide when to move a directory.
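The last-shuffled throttle amounts to a simple time check. A sketch with a made-up cooldown interval (the real policy, as noted, doesn't use this field yet):

```python
# Illustrative throttle: skip re-shuffling a directory that was moved
# recently, so daemons joining in rapid succession don't cause churn.
SHUFFLE_COOLDOWN_SECS = 300  # assumed value for the example

def may_shuffle(last_shuffled, now):
    return (now - last_shuffled) >= SHUFFLE_COOLDOWN_SECS
```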
So yeah, once that's done, we update the mapping, and once we have it on disk — which means even if we restart, we know about that particular directory path — we can ack back saying that it is now scheduled. But yeah, everything is asynchronous.
So let's see how this update_mapping works; I can quickly do that. Yeah — update_mapping, this is the class that gets called.
Okay, so since it's asynchronous, we record it — we assign a request id for that particular async call — and call this UpdateDirMapRequest, which is here. It takes an update map, which contains the key-value pairs, and removals, which are just the keys. And how we do it: we don't load the OSD with thousands of ops; we slice them, so we do it in slices.
So if you have thousands of updates to be applied to that particular object's omap key-values, you send 256 ops at a time to the OSD. This is something I learned when I was doing rbd work: if you just send thousands and thousands of ops to the OSD, you get slow-op warnings and all kinds of warnings.
So the recommended way was to do this in slices, and we do the same. If you see, it's kind of a mini state machine again: you slice it, gather the updates and gather the deletes, and it uses the rados aio API to do the omap updates. Once that call is done, you call handle_update, and handle_update calls into send_update again, which processes the next 256 updates.
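The 256-ops-at-a-time slicing can be sketched like this; `submit_batch` stands in for the asynchronous rados omap call:

```python
# Sketch of the slicing described above: apply omap updates in batches of
# 256 so a single request doesn't flood the OSD with thousands of ops.
MAX_UPDATE = 256  # batch size mentioned in the talk

def sliced(items, size=MAX_UPDATE):
    for i in range(0, len(items), size):
        yield items[i:i + size]

def apply_updates(updates, submit_batch):
    # updates: key -> value pairs destined for the object's omap
    for batch in sliced(list(updates.items())):
        submit_batch(dict(batch))
```

In the real request class, each completed batch re-enters send_update until nothing is left, and then the finish callback fires.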
If nothing is left, you're done, and you call finish. finish calls the on-finish callback, which is nothing but the async callback we provide here. So the idea is that once you do a bunch of omap updates, you call into the state machine again: you gather the list of directories and put them in the dir-paths list again, so that the state machine can process them. Okay, so that's done. Then we come back to how instances are handled: we had this instance watcher that notifies us of new instances and of instances which might no longer exist.
So again, once this thing figures out which are the new instances and which are the old ones, it calls into a listener. We have this instance listener that's provided; the instance listener calls this handle_instances from here and gets a list of instances that have been added and removed.
So we call into update_instances. For instances that have been removed, we blocklist them. Blocklisting is nothing but doing a mon command — this osd blocklist op — and the address of the removed instance gets blocklisted, because it might be the case that there's a network hiccup and the mirror daemon is not necessarily down or offline.
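Blocklisting a removed instance's address boils down to issuing an `osd blocklist add` monitor command. A hedged sketch, with `mon_command` passed in as a stand-in for the mgr module's mon-command interface:

```python
# Sketch of blocklisting a removed mirror daemon's rados address via a
# monitor command (`ceph osd blocklist add <addr>`). mon_command is a
# stand-in callable returning (retcode, stdout, stderr).
def blocklist_instance(mon_command, addr):
    cmd = {'prefix': 'osd blocklist', 'blocklistop': 'add', 'addr': addr}
    ret, outb, outs = mon_command(cmd)
    if ret != 0:
        raise RuntimeError(f'blocklist failed: {outs}')
    return outb
```

Once its address is blocklisted, a daemon that was merely partitioned notices and backs off rather than continuing to synchronize.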
So once we blocklist its address, the daemon figures out that it has been blocklisted and then goes and backs off. For the instances that have been added, we store the instance id on disk, and once that's done, we send this UpdateInstancesRequest — same pattern as before.
It takes the bunch of instances that have been added and removed, and again uses that slicing — 256 updates at a time. Once that's done and acked back, we call this handle_update_instances.
In handle_update_instances, again, we have these instances that have been added and removed. So what we do is extract which directories have been mapped to those particular instances: if an instance was removed, we need to move all the directory paths that had been assigned to that particular instance to other instances. That's not a shuffle.
This is not shuffling, because the instance no longer exists, so we need to move everything to other instances. And for those that have been added: once you have a new instance recorded on disk, you check the policy again and call this add_instances. This is where shuffling actually happens.
Once a new instance is seen, we figure out how many instances we have now and how many directories, and try to spread the load across them; we return a bunch of directories that need a shuffle. So again, if you see, we add those directories back to the list for which the asynchronous state machine needs to run, and in this case we set the action type.
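The load-spreading step when a new instance joins can be approximated like this; it is a deliberate simplification of the policy logic, not the actual algorithm:

```python
# Illustrative load spreading: compute a per-instance target and pick
# directories off the overloaded instances until each is at or near it.
def dirs_to_shuffle(assignments, instances):
    # assignments: dir_path -> instance_id; instances: all live instance ids
    target = len(assignments) // len(instances)
    load = {i: [] for i in instances}
    for d, i in sorted(assignments.items()):
        load.setdefault(i, []).append(d)
    moves = []
    for dirs in load.values():
        while len(dirs) > target:
            moves.append(dirs.pop())  # candidates to reassign elsewhere
    return moves
```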
As you remember, we set the state for those directories that need shuffling to shuffling, and the state machine driver again does the same thing: it follows with a release to the old instance id, unmaps the directory in memory, maps it again — finds a new instance id — does a map update on disk, and then sends an acquire to the new instance. And that's how shuffling is done.
Yep, I think that's pretty much it. Remove — there is, yeah — we again do the same: we store it on disk, updating the map to say that it's now purging, because we just can't schedule an action in memory and then forget about it. If the manager module goes down and restarts, we won't know that this directory was removed.
So we put this purging identifier, saying that the directory is in the process of removal. If the manager module restarts, it sees this purging key and then adjusts the state accordingly: if it was in associating or initializing earlier, it now moves it to the disassociating state. I think that's done here — yep, if you have purging, you just flip the state and allow the state machine to continue.
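The restart handling of the purging marker can be sketched as a one-line state adjustment; the key name `purging` and the state strings are illustrative:

```python
# Sketch of the restart path: a directory whose on-disk record carries the
# purging marker is flipped to disassociating so the state machine can
# finish the removal; everything else restarts in initializing.
def state_on_restart(record):
    # record is the decoded omap value for one directory
    return 'disassociating' if record.get('purging') else 'initializing'
```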
Thanks, guys, for attending. We'll probably do a part two pretty soon and cover the actual daemon code. Thanks.