From YouTube: CephFS Code Walkthrough: CephFS Mirroring Part 2
Description
Presented by: Venky Shankar
Schedule: https://tracker.ceph.com/projects/ceph/wiki/CephFS_Code_Walkthroughs
Again, yeah, hello and welcome to this talk, a follow-up talk on the mirror daemon. In the last one we saw mostly the interfaces and how things tie into the manager module, the mirroring module, and the inner workings of the mirroring module.
So this talk is about the daemon itself, the daemon which actually does the synchronization of snapshots between the file systems in the primary and the secondary cluster.
This cephfs-mirror daemon is one per cluster, which means the mirror daemon can handle multiple file systems. Since CephFS is multi-fs ready, it made sense to handle multiple file systems, so you need just one mirror daemon for the entire Ceph cluster, and the mirror daemon needs to be run on the primary cluster.
Mirroring is based on a push model: the snapshot data is pushed from the file system in the primary cluster to the file system in the secondary cluster. The mirror daemon uses libcephfs, so it's a normal client application.
It uses libcephfs to talk to both the local and the remote cluster. While designing this, we thought about doing mounts on the primary and the secondary cluster so that we could use rsync, which by default tries to synchronize only the diffs between two directories or two datasets, but there's the overhead of maintaining the mounts, and, you know, a lot of other things.
So we switched to using the Ceph client library, libcephfs, and we do support incremental synchronization of data, but right now, in Pacific, that's not there. What's available in Pacific is a bulk copy: it transfers the entire thing, everything inside a snapshot. So if you are transferring another snapshot, you delete the directory on the remote file system and then copy it again.
A quick point about how the snapshots are synchronized: for a given snapshot, everything under the snapshot is copied over to the remote file system for that particular directory, and then a snapshot is taken with that particular name on the remote file system, so that ensures the snapshot names are in sync. So right now in Pacific, as I said, it's a bulk remote copy, but that is not that performant; it's slow.
So we have a work-in-progress PR that switches to incremental synchronization based on inode mtime and ctime. The code that I'll be walking you through today is the incremental snapshot one. The bulk copy is not that interesting; the only difference there is that it's a remove call followed by a bulk copy call. But we'll see how the incremental synchronization is done.
So, as I said, the mirror daemon is a normal libcephfs client application; it uses libcephfs to talk to the local and the remote cluster. That means, for the mirror daemon to initialize the libcephfs handle, or to connect to a cluster, it either needs the Ceph configuration file to be present on the host where it's running, or it needs the mon address, the keyring, and the username to connect to the cluster.
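To make those two connection paths concrete, here is a minimal sketch using the libcephfs C API (my own illustration, not the daemon's actual code; client names are made up):

```cpp
#include <cephfs/libcephfs.h>

// Path 1: a ceph.conf present on the host (how the primary is mounted).
int connect_local(struct ceph_mount_info **cmount) {
  ceph_create(cmount, "mirror");          // connect as client.mirror
  ceph_conf_read_file(*cmount, nullptr);  // search the default conf paths
  return ceph_mount(*cmount, "/");
}

// Path 2: no conf file; mon address plus cephx key passed in directly
// (this is what the bootstrap-token flow enables for the remote cluster).
int connect_remote(struct ceph_mount_info **cmount,
                   const char *mon_host, const char *key) {
  ceph_create(cmount, "mirror_remote");
  ceph_conf_set(*cmount, "mon_host", mon_host);
  ceph_conf_set(*cmount, "key", key);  // the key itself, not a keyring path
  return ceph_mount(*cmount, "/");
}
```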
For the primary cluster, we expect that the Ceph configuration file for the primary cluster is available on the nodes where the mirror daemons run.
For the secondary cluster, we have two options. The secondary cluster is what we call a peer to the primary cluster file system, and we support two ways of adding a peer. One is via the peer add interface, and I think I covered it in the first talk. peer_add requires the secondary cluster's Ceph configuration file to be present on the primary cluster.
And the keyring, too, has to be present on the primary cluster. Now, there's another way to add a peer, which is what's recommended, and that's called bootstrapping. Bootstrapping a peer means creating a peer bootstrap on the remote cluster and then importing it on the primary cluster. The bootstrap create creates a token, which is nothing but an encoded digest of the variables that are required to connect to a cluster.
That is, the monitor address, the username, and its keyring. So that creates a token, and then we use that token and import it in the primary cluster, which is like adding a peer again.
But when you do a bootstrap-based peer addition, you don't require the remote cluster's configuration file and keyrings to be present; we'll see how that's done. Okay, and yeah, for Pacific we only support running a single mirror daemon. You could possibly run multiple mirror daemons, but that is currently untested.
Good old main.cc. We do basic initialization here, like creating a messenger, creating a mon client, and doing some daemonizing things. That's probably the only thing we do here, nothing much interesting. So yeah, it all starts with, you know...
...the cluster watcher. If you recall from my previous walkthrough, all the peer changes and peer updates are stored in the FSMap, and even the details of which file systems have mirroring enabled are stored in the FSMap.
So the first thing the mirror daemon does when it spawns is subscribe to the FSMap, because it needs to know what's available, what file systems have mirroring enabled, and what peers are added. The cluster watcher is used for that. It's pretty simple: you just subscribe to the FSMap and you get notified when there's an FSMap update, which lands us here in handle_fsmap. handle_fsmap is pretty simplistic.
You just walk through all the file systems available in the FSMap and see which have mirroring enabled and which have mirroring disabled (say a file system had mirroring enabled earlier and now mirroring has been disabled), and you kind of figure out...
...which file systems need mirroring enabled and which need mirroring disabled. What I mean by enabling is that internally, when the mirror daemon sees from the FSMap that mirroring needs to be enabled, it spawns a bunch of threads for those file systems, adds peers, and kicks off all the synchronization.
Okay, so about the way this class is used: most of the classes here have been kept such that they can be tested standalone. The way you use the cluster watcher is via a listener, so you provide a listener object which says: notify me, call this virtual function, when mirroring is enabled; call this one when it's disabled; and likewise when peers are added and removed.
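A simplified sketch of that listener shape follows; the names here approximate the real cluster_watcher.h interface rather than quoting it:

```cpp
#include <string>

struct Peer {
  std::string uuid;       // every peer has a uuid associated with it
  std::string site_name;  // free-form label for the remote site
};

// Callers subclass this and implement the virtual functions; the cluster
// watcher invokes them from handle_fsmap() when the FSMap changes.
struct Listener {
  virtual ~Listener() = default;
  virtual void handle_mirroring_enabled(const std::string &fs_name) = 0;
  virtual void handle_mirroring_disabled(const std::string &fs_name) = 0;
  virtual void handle_peers_added(const std::string &fs_name,
                                  const Peer &peer) = 0;
  virtual void handle_peers_removed(const std::string &fs_name,
                                    const Peer &peer) = 0;
};
```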
Okay. So before going into more details, let's see some data structures involved here; we'll go into Types.h. A CephFS file system is, you know, just a name, a string name; the mirror daemon uses a tuple of the file system id and the file system name.
You know, it could be that a file system is getting deleted, which is equivalent to mirroring being disabled from the mirror daemon's point of view, and while it's trying to shut down, back off, and do the unmount, a file system gets created again with the same name. For that reason, we do not only use...
...the file system name as the id, but a tuple of the file system name and the id. The id is always unique for a file system, so if you delete a file system and create one with the same name, it will have a different fscid. So we use that tuple internally everywhere.
FilesystemSpec is basically just a specification for the file system; it's nothing but an encapsulation of the file system with an additional pool id. So yeah, back to the cluster watcher: the user of the cluster watcher, when it initializes the cluster watcher object, gives it a listener, and it needs to subclass this listener class and implement all these virtual functions. It needs to implement, say, what do I do when mirroring is enabled...
...what do I do when mirroring is disabled, and likewise the implementations for when peers are added and peers are removed. So that's what the cluster watcher does. Let's see if anything interesting is there. Yeah: if you delete a file system, we consider that as mirroring being disabled, so we will try to back off if we are trying to synchronize it, and here we actually go and invoke those virtual functions.
All this gets initialized when, as you would have seen in main, we kick things off by creating an object of this Mirror class, and then we call init.
That gets everything up and running. init does basic initialization; we come to the service daemon, through which we actually send updates to the Ceph manager. There's a blob of data, which is kind of like a status for the daemon, so the blob we send to the manager has things like which file systems have mirroring enabled and what the different stats are; we just include basic stats.
How many directories are scheduled for mirroring, whether we encountered any failures, whether we recovered from any failures, and things like that. Once we are done with all those, we initialize the mon client, and that's pretty much it. And as you would have seen here, we do an init and then we run, and we block on run. run is nothing but subscribing to the FSMap via the cluster watcher, and then we wait until we are stopped, which is a user-initiated termination.
Once we are done with the cluster watcher, everything is kind of asynchronous. Now we are getting these callbacks: the cluster watcher is invoking these virtual functions, which are callbacks, and that tells us whether mirroring is enabled or not.
Let me show you. Yeah, so here we actually subclass the listener class from the cluster watcher and we provide our own implementation, which is what to do when mirroring is enabled or disabled, and peer add and peer remove. So yeah, let's quickly see what happens when mirroring is enabled.
Okay, so once mirroring is enabled, what we do is, again, things are kept pretty much asynchronous here. How is this driven? Once you subscribe to the cluster watcher and then block until somebody does a termination, we have these callbacks coming in, and these callbacks are actually queued.
Yeah, so we have this map between a file system and the actions related to that file system. MirrorAction is nothing but the different types of action; it's like a list. You can say, for this particular file system, these are the queued actions that need to be performed, and all this is driven by a timer object.
Okay, so let's quickly see that too. Before that, there are a bunch of config options that actually drive some of these things, so let's see those as well. This has been converted to a YAML, which is nice. We have this cephfs_mirror_action_update_interval; this is the interval for driving the asynchronous mirror actions.
So what we do is, when we actually have mirroring enabled for a file system, we have these context completions that we queue. The MirrorAction has something called an action context, which is a list of context callbacks. We queue these context callbacks in that particular list, and the updater, the timer thread, just goes and completes these contexts. So all this is again kept asynchronous.
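Here is a small sketch of that queue-then-drain pattern; it's illustrative only (the daemon uses Ceph's own Context and SafeTimer classes, not std::function):

```cpp
#include <functional>
#include <list>
#include <map>
#include <mutex>
#include <string>

struct MirrorAction {
  std::list<std::function<void()>> action_ctxs;  // queued completions
};

class ActionRunner {
  std::mutex lock;
  std::map<std::string, MirrorAction> actions;  // keyed by file system

public:
  void queue(const std::string &fs, std::function<void()> ctx) {
    std::lock_guard l(lock);
    actions[fs].action_ctxs.push_back(std::move(ctx));
  }

  // Invoked periodically by the timer thread (the update interval option).
  void tick() {
    std::map<std::string, MirrorAction> todo;
    {
      std::lock_guard l(lock);
      todo.swap(actions);  // drain outside the lock; ctxs may queue more
    }
    for (auto &[fs, action] : todo) {
      for (auto &ctx : action.action_ctxs) {
        ctx();  // complete the context, e.g. enable or disable mirroring
      }
    }
  }
};
```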
So let's see how enable mirroring is done, which is probably the simplest one. When we have mirroring enabled for a file system, this context gets completed by the updater thread, and when it completes the context, you end up calling finish. Once that is done, we call enable_mirroring, so we again come back here.
Okay, so here again, what we do is just initialize something called an FSMirror class. We have this FSMirror class that handles mirroring for a particular file system, and the file system is again nothing but a tuple of the file system name and the file system id.
So we kind of initialize that particular class object. We'll see how FSMirror is written, but once that is done, we call its init, and once the FSMirror initialization is done, it calls back here. So here again things are asynchronous: when we do the init, we provide a context completion, and its on-finish is...
...if you see the class that we are looking at now, when we actually enable mirroring, the context here is this one. So when the context is completed, this gets called, and internally we call handle_enable_mirroring.
If the mirroring failed, we do some things, and here we actually go into the service daemon: if things have failed, we update the service daemon status so that it can be notified to the Ceph manager, and if a peer is pending, we do an add peer. So yeah, if you just follow this, for every particular action there's an associated context class for it, which drives the asynchronous behavior.
Likewise for disable mirroring, and for restart. If you want to restart mirroring, which we do at certain times when mirroring has failed for some reason, we try to restart it, not that frequently, and we have a config for that. When we try to restart it, we disable mirroring and then enable it again, and that again is driven internally by the context classes that we saw.
A
All
the
baller
plates
here
are
basically
just
to
drive,
drive
up
a
particular
action
for
a
of
a
file
system
which
has
mirroring
enabled
or
you're
trying
to
disable
or
you're,
trying
to
add
peers.
All
that
is
is
put
up
here
in
this
class.
We
do
some
other
things
such
as.
If
you
see
these
config
options,
we
have,
you
know,
restart,
mirror
on
block
list
and
there's
recently
added.
I
think
restart
me
around
failure,
so
yeah.
If you recall from my earlier talk, it could be that the mirror daemons have been blocklisted by the mirroring module. That can happen if there's a network hiccup between the mirror daemon and the manager, which causes the manager module to think that the mirror daemon is not responding to its notify messages.
It then goes and blocklists those addresses, and once blocklisted, we kind of restart mirroring again for those file systems that have been blocklisted, something like 30 seconds later.
That's again done by the updater timer thread here, which is this part. It's nothing much interesting: just figure out what the next action is that needs to be performed, go and complete the context, and take care of certain things like blocklisted mirror instances and failed mirror instances, and try to restart them if needed. Okay, now we come to the entity that actually does the mirroring.
So we have this FSMirror class that takes care of mirroring for a particular file system. It's kind of a multiplexer for handling multiple peers. Right now we support adding only a single peer, but we could probably support adding multiple peers too, so this is a multiplexer for supporting multiple peers, by the way.
So now, the interesting thing here is, if you recall from my earlier talk, there are two index objects that are associated with a mirror daemon, or rather not only the mirror daemon but the mirror daemon and the manager's mirroring module. We have this global index object, which lives in the metadata pool for a particular file system. We use all that here, and we'll see how that's done.
So when we kick things off here, we try to connect to the local cluster. Well, the thing is, for the primary cluster we need the Ceph configuration file and the keyring for the user that's used to mount file systems in the primary cluster.
Once that's done, we mount it, and now we do this thing for initializing the instance watcher. So we have two concepts here: an instance watcher and a mirroring watcher.
The InstanceWatcher class is used to handle notify messages for the private index object, which is the cephfs_mirror object name suffixed with the instance id, and we have this MirrorWatcher class that's used to handle notify messages for the global index object, the cephfs_mirror object.
Okay, so how is this done? If you see the instance watcher, the crux here is that we use the RADOS aio_notify and aio_watch APIs to establish a watch on a particular instance object. Here again things are kept asynchronous, and it's kind of a state machine. So what we do first is create an instance object.
When we create an instance, we do this RADOS operation, which is an object write operation with an op.create. So this is where you can stack multiple operations on a particular object and then fire them off with a single aio_operate call, okay.
So here we do an aio_operate which just creates the object. The oid here is nothing but, if you see, the instance object name, and that's kind of initialized here in the constructor: it's the cephfs_mirror name suffixed by the instance id, the RADOS instance id. So we go and create that, and once that is done, we call register_watch. register_watch establishes a watch on that particular object, and for that we use the Watcher class.
register_watch does an aio_watch on that particular instance object. Likewise for the mirror watcher, it does the same thing, the only difference being that it just establishes a watch. It doesn't need to create the object, because the manager module creates it: when you enable mirroring on a file system, the manager module creates...
...this cephfs_mirror object in the metadata pool. So we don't need to create that; all we need to do is establish a watch on it, so we call register_watch. And once that is done, whoever sends a notify via that particular object, we get notified here, where we call handle_notify. So, for the instance watcher...
...the notification message is not binary, you know, because we send these from Python, so it's ASCII-based: we send a JSON as the notification payload. The JSON has basically two things: the directory path to operate on, and the mode. Recall from my earlier talk that the manager module sends these acquire and release messages to the mirror daemons.
Acquire is like assigning a particular directory to a mirror daemon, and release is like: the module probably wants to reassign that directory to another mirror daemon, and it wants the mirror daemon which is currently synchronizing this directory to back off and not worry about the directory anymore.
So we get these JSONs with the appropriate path and the mode. Once that's done, again, we are all kind of listener-based here: we have this instance watcher where, once we get the notification, we call these virtual functions, which the caller has subclassed and overridden with their own implementation.
So we do that here. We only support the basic acquire and release modes: acquire is like owning a directory, and release is backing off, not worrying about that directory anymore. This is all tied up in the FSMirror thing, but let's quickly look at it.
Yeah, so we have this: we kind of derive from the instance watcher's listener class; I call it SnapListener.
Okay, it should probably be called a directory listener, because you're not really listening on snapshots; it's kind of operating on snaps for the directory, so yeah. It doesn't hurt to rename it at some point to be really consistent with what we are doing.
We have this listener class again; we have the implementation, we override the virtual functions, and we call this handle_acquire_directory when we want to acquire a directory.
And likewise for what to do when a directory needs to be released. So this is all again tied up in the FSMirror class.
So this is where we need an FSMirror. When we actually initialize an FSMirror object, when you create an object and call init, it does all this: it creates the per-daemon private index object, establishes a watch on it, then establishes a watch on the global object, and that's pretty much it.
I guess this can be followed as a state machine, a simple state machine. First we call init_instance_watcher, which does the init of the instance watcher we saw: it creates the object and establishes the watch. This is the callback; then we call the mirror watcher.
We initialize the mirror watcher, which similarly just establishes a watch on the mirroring object. Once that is done, we invoke the callback, which goes back into the FSMirror, and once all that is done, we are kind of ready to handle any notifications from the mirroring module.
You know, one of these actions could be adding a peer; we saw this in the FSMap last time. Let me see where it is... Oh sorry, that's in the peer replayer, okay, it's not here, okay!
So once we add a peer, we call this init_peer, which is nothing but, you know, it creates an object of the PeerReplayer class. The PeerReplayer class is where all the logic for synchronizing a particular snapshot to the remote target is implemented.
So this is actually multi-threaded: we assign a bunch of threads for a particular peer. Right now we support adding only a single peer; if you add multiple peers, you kind of insert here, and you'd probably want to remove that restriction sooner or later. And again, remove_peer does nothing much: whatever the PeerReplayer class initialized, it just goes and shuts it down. Nothing much in that.
So when we get these handle_acquire_directory callbacks from the instance watcher class, which tell us that this directory has now been acquired, or that a directory has been released: you will see the service daemon thing again and again in these places. This is updating a metadata blob which is managed by the service daemon, so that it notes how many directories have been scheduled to this mirror daemon.
Likewise, you will see that release just does the opposite: it sets that we now have this many directories to handle. And the service daemon kind of periodically pushes all these metadata blobs to the Ceph manager. Once we acquire a directory, we call this add_directory interface on the peer replayer, and we'll see the PeerReplayer in a while. So yeah, PeerReplayer, okay.
So the interesting thing here is, okay, I mentioned that for the mirror daemon to talk to the remote file system in the remote cluster, it either needs the Ceph configuration file of the remote cluster and the keyring and things like that, or we have this bootstrap thing, where we create a bootstrap token on the remote cluster and then import it in the primary cluster.
Okay, so let's see. Create is nothing but: we create a token, which is base64-encoded, and it has the cluster fsid and the file system name, and since this needs to be done on the remote cluster, it's the remote cluster's fsid, the fs name, and the user. You create a user here; we use fs authorize to create a user, and then we fetch the keyring.
Sorry, okay, so it's just a dictionary of all the minimum things that are required to connect to this particular cluster and file system, plus the mon host. Once that's done, we kind of create a base64-encoded string of that, and that becomes the token. And the interesting part is when we import it in the primary cluster: we just decode the token, and once that's done, we have the required details and we call into peer_add.
What's done there is that the keyring and such are stored in the mons: it uses the mon config store for that. Let's see add. Okay, quickly: we do some basic checks (we covered this), and we kind of try to set an xattr on the peer file system's root, so that if two clusters try to do a peer_add on the same peer...
...only one succeeds, because it uses the exclusive-create flag for the xattr, and so on. Once that is done, yeah, this is the point where we actually go and do a config set.
Yeah, so we save these details into the mon config store. The key here is this peer config key, which is like a path built from the peer'd file system and the peer UUID. Recall that every peer has a UUID associated with it, so that becomes a distinct key for that particular peer, and we save all these details under it. So if you see where the config set is done, we set these per file system and do a set of the remote config.
Basically, we just have the mon host, the user name, the key, and something called a site name. The site name is just a user string that can be anything; it's used to identify this particular remote site. It's not used anywhere internally in the mirror daemons or in the manager module; it's just for the user to make sense of, okay.
So what we do here is, connecting back to the mirror daemon: when we initialize the peer replayer, we try to connect to the remote cluster and mount the file system. We try to fetch these required details from the mon config store: we do a mon command to get those keys, and if we get something, we decode it, and then for this connect interface we provide the mon host, this cephx key, and the client name.
And it's nothing but: once we have the mon host and the keys, we set them, we override them, in the CephContext's config. So here we set the mon host and then the key. With this, we do not require the remote cluster's Ceph config file and keyrings to be present, as in /etc/ceph; with the bootstrap we don't require that.
We can override that and save all these details in the mon config store, and the cephfs-mirror daemon reads them back and uses them.
Okay. So, as I said, every peer has a specific number of threads assigned to it. You have these multiple threads trying to synchronize snapshots per directory; a thread handles a directory. So it's like: we have a list of directories for which the snapshots need to be synchronized.
Each thread tries to pick a directory for which snapshots need to be synchronized. So you can have 10 directories and just three threads, but each of them will handle just one at a time.
The thing that has been taken care of is that two or more threads cannot handle a particular directory, because in the end we synchronize one snapshot at a time, whether with the incremental approach or even the old-fashioned remove and bulk copy: just one snapshot at a time. If multiple threads were to synchronize snapshots for the same directory, that would probably be bad.
That would be disastrous, because multiple threads would be copying snapshots for the same directory to the remote file system, which is not great. So we do snapshots one at a time for a directory.
We are seeing this max concurrent directory syncs option here, okay, the first one, yeah. We default to 3; you would probably increase it based on how big your machines are and how fast you need synchronization, but the main point is that we have a bunch of worker threads for handling snapshot synchronization.
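A sketch of that worker-pool shape, purely illustrative (the daemon uses its own thread and queue classes; the one-directory-per-thread guarantee comes from claiming the directory before releasing the lock):

```cpp
#include <condition_variable>
#include <mutex>
#include <set>
#include <string>
#include <thread>
#include <vector>

class DirSyncPool {
  std::mutex lock;
  std::condition_variable cond;
  std::set<std::string> pending;  // directories waiting for a worker
  bool stopping = false;
  std::vector<std::thread> workers;

  void run() {
    std::unique_lock l(lock);
    while (!stopping) {
      if (pending.empty()) {
        cond.wait(l);
        continue;
      }
      std::string dir = *pending.begin();  // claim it; no other worker can
      pending.erase(pending.begin());
      l.unlock();
      sync_one_snapshot(dir);  // snapshots for a directory go one at a time
      l.lock();
      // the real replayer re-queues `dir` when more snapshots show up
    }
  }

  static void sync_one_snapshot(const std::string &dir) { /* ... */ }

public:
  // nr_threads would come from the max-concurrent-directory-syncs option
  explicit DirSyncPool(int nr_threads) {
    for (int i = 0; i < nr_threads; ++i) {
      workers.emplace_back([this] { run(); });
    }
  }

  ~DirSyncPool() {
    {
      std::lock_guard l(lock);
      stopping = true;
    }
    cond.notify_all();
    for (auto &t : workers) {
      t.join();
    }
  }

  void add_directory(const std::string &dir) {
    std::lock_guard l(lock);
    pending.insert(dir);
    cond.notify_one();
  }
};
```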
So we have seen these interfaces, or rather seen the FSMirror class call these interfaces. add_directory just adds the directory to the queue; remove is the reverse: just remove it from the queue, but if a synchronization is going on, you ask it to back off and then we remove it. Most of the things here are pretty straightforward.
You don't really need to go into each implementation detail. There's one thing that we do here: once a thread picks up a directory for snapshot synchronization, we lock; we do a...
...we do an flock on that directory on the remote file system. The reason we do that is, recall that we probably want to support multiple cephfs-mirror daemons synchronizing concurrently at some point in time. So say a particular directory path is assigned to a mirror daemon, and one of its threads is actively synchronizing snapshots for it, and another mirror daemon comes up, and...
...the mirroring module figures out that this particular directory can probably be shuffled to the other mirror daemon, for balancing, and it happens that this mirror daemon, for that file system, is currently synchronizing snapshots in that particular directory. And these acquire and release calls are not blocking calls, right?
So a release call may come in, and internally, as I showed you with the removal of a directory, or rather the release of a directory, we ack back saying that it's done, but internally, if a synchronization is going on, it needs to back off later; we do not block until we finish or interrupt synchronizing that directory before we release it. So from the mirroring module's point of view...
...the mirror daemon has kind of noted that this directory needs to be released, and the module can now go ahead and assign it to another mirror daemon, and that mirror daemon now notes that it needs to start handling the snaps for that directory.
So the flock here ensures that we don't run into a case where one mirror daemon is backing off from synchronizing a directory while the other mirror daemon starts synchronizing that directory. Before we scan snapshots and synchronize them, we take a lock, an exclusive, non-blocking lock, on the directory on the remote file system.
So if the other mirror daemon were to try to pick that directory and try to synchronize it while this mirror daemon is just backing off, this one wouldn't have released the lock yet, or would be just about to release it. So we don't run into a case where one mirror daemon is backing off, completing a particular I/O operation, while the other mirror daemon kind of does the I/O operation again; we don't know what would happen then. So we guard all of that with an exclusive lock, and it's non-blocking.
So if the flock call returns that we might block, we retry it later, which kind of tells us that it's probably locked by another mirror daemon, and sooner or later it will probably release it, and then we acquire the lock and do the normal operation again.
Once that is done, you know, that's the main part, yeah. Then the other interesting thing is, for a particular directory, the mirror daemon can identify snapshots that have been deleted and renamed. You can rename a snap, you can delete a snap, and the mirror daemon can identify that; if you rename a snap, it's not like the mirror daemon will delete it first and then copy the whole thing again.
And for that, we build a map, the key being the snap id and the value being the snap name, and we do this for the local and the remote file system for a directory, and from that we can figure out what happened to a snapshot. So, before we get to building the snap map: how do we know that a snap has been deleted or renamed? Because the snap ids on the remote file system for a directory would be different, since that's another cluster.
So what we do is, when the mirror daemon synchronizes a snap: we use the mksnap call to create the snapshot on the remote file system, so when it has synchronized all the data, we do a ceph_mksnap. And these are things we added for mirroring: we added snap metadata, which is just a free-flowing key-value pair that you can store, that is actually attached to a snapshot.
It's just a free-flowing key-value pair tagged onto the data, and then we have the snap info call, which gives you the snap id and the snap metadata, if any, that was associated with the directory snapshot when creating it.
So what we do is, we'll see here. Okay: while creating a snapshot, once we have transferred all the data, the snapshot that is being synchronized has a snap id on the primary file system, and we store this particular id in the metadata of the remote snap. Say it had id 2 on the primary file system...
...when we take the snap on the remote file system, its id can be anything, 10, 12, any number. But to identify these deletes and renames, and to ensure that we start from the correct snap, we store the primary file system's snap id for that snapshot, which is 2 here, in the metadata of the remote snap. So when we do this build snap map...
...when we build this snap map for the primary file system, you'll see it's just a map between an integer and a string, the integer being the snap id. For the primary file system, it's the actual id, the id of the snapshot. For the remote file system...
...it's the id from the metadata that we stored. So the id here is not the snap id of the snap on the remote file system for that directory, but the id stored in the metadata, which is nothing but the snap id of the snapshot in the primary file system. So once we do that, we now have this map.
We can compare these maps: if a snap id is missing from the local snap map but alive in the remote snap map, that means it's deleted. If the id is the same but the name is different, that means it has been renamed, so we can just do a rename call on the remote file system and rename that particular snapshot.
So once we have this snap map, we need to figure out what's next. Now, this code, what we have here, is the incremental synchronization that is currently under review as a PR; the master code would be a bit different, because the incremental part is not there and we just do the rmdir, remove the contents and then copy the whole thing again. But here we have this incremental sync. So what we do is, based on the snap map...
Looking at the snap map, we figure out the next snapshot to start synchronizing from; we can infer it from the local and the remote snap maps. Once that's done, we check if we can do an incremental sync. Now, how is that done? We have a new xattr for this; let me quickly show that. Yeah, so in that PR we introduce this ceph.mirror.dirty_snap_id xattr.
Before we start synchronizing data for a particular snapshot, we set this xattr on the directory on the remote file system; its value is the snap id of the snapshot that is currently being transferred.
Initially, when we are transferring the first snap, this xattr is not available, which means there is nothing to compare to, and we need to do a bulk copy; that's equivalent to starting with the first snapshot. So that needs to be a bulk copy, and before starting the bulk copy, we set this xattr on the remote directory, we do the transfer, and then we do an mksnap. When we come back and choose another snapshot to synchronize...
...we do this comparison based on what's available as the xattr value in the remote directory and the snap id we're transferring from.
So with incremental snapshots, what we do is pick two snapshots: one that has already been transferred to the remote, and the other that is currently being transferred, so they will have snap ids.
So if the xattr value on the remote directory matches one of those two snap ids, we can do incremental transfers based on local comparison, which means we need not compare the current snapshot data with the remote file system's data for that directory. We can infer it locally by comparing two snapshots which are in the local file system, and that's because the snap id tells us what the data belongs to.
The value of ceph.mirror.dirty_snap_id identifies which snap id the data under this particular directory is associated with. If it's associated with the snap id of the older snap, that's fine, because then we can again do a local comparison between the earlier snap and the next snap.
And it could happen that we are transferring data for a particular snapshot, which means we've set that snap id in the dirty_snap_id xattr, and then the cephfs-mirror daemon restarts. Once it restarts, we read this snap id, which now matches the current snap id.
We can again do incremental transfers based on local comparison, because the snap id matches: we know that the data on the remote is now somewhat incomplete, but it belongs to the data for the current snap id. The catch here is that we need to ensure that the snapshot we are comparing from is also the one against which the data was transferred to the remote file system. We need to ensure that before the cephfs-mirror daemon was restarted...
...the data on the remote file system was transferred by comparing two snaps, and after restarting, we are still comparing those same two snaps. That needs to be ensured, because if we choose a different snapshot now, you really can't trust the data in the remote file system directory. Yeah, so we do all those checks based on the dirty_snap_id xattr and the local and remote snap maps.
With this, we can kind of figure out whether it's safe to go ahead with incremental transfers based on local comparison.
If for some reason we cannot, say we are transferring a snap by comparing it to snapshot X, which is a snapshot that was already transferred, so snapshot X and now snapshot Y; then the cephfs-mirror daemon is restarted, or mirroring is disabled, and then, when it comes back again, when mirroring is enabled again, snapshot X doesn't exist anymore.
Snap X doesn't exist, so if you're comparing to the snapshot before that, obviously we can't use incremental snapshots based on local comparison. In that case, we use incremental snapshots based on remote comparison, which is comparing the snapshot in the primary file system with the actual data on the remote file system under that directory. It uses the same logic; the logic is the same.
It's comparing data between two snaps, versus comparing data between a snap and the data in the remote file system directory.
The logic is the same; it's just that we use the correct mount variable, which either points to the local file system, with the path being the last snapshot, or points to the remote file system, with the path being the remote file system directory. And all of this is again based on the recently introduced, or rather almost ready to merge, *at-calls-based APIs.
So we kind of open a file descriptor on the snapshot directory and do the *at-based calls relative to that. And this is not yet merged, so, you know, without it, it could happen that while we are transferring a snapshot, somebody deletes the snapshot and creates one with the same name but entirely different contents. If you do pure path-based operations, there might be a case where...
...you might just transfer the incorrect data, because you're just doing path-based operations. But with these all being fd-based, *at-based calls, you have the inode pinned in memory, because of the open, and we do everything relative to that.
So yeah, that's the peer replayer; that's the most important thing to be discussed about the PeerReplayer. The crawling of the file system data is, you know, nothing interesting: we just crawl, do a walk of the file system, and then compare and sync, yeah.
The service daemon side just uses the APIs provided to periodically push the metadata blob up to the Ceph manager, and we have an interface in the manager module to kind of pretty-print the JSON, the blob, which is sent by these mirror daemons.
I guess that's it, yep. Thanks guys, thanks for attending the code walkthrough.