From YouTube: CDS Pacific: CephFS
A
Whatever you want, feel free. Alright, there are just a couple of items here on the list that I had; I guess my goal is to go through this, make sure everybody understands what the work is, and then we can translate it to whatever is on the Trello board at the end. But we don't have to do that during the call if you want to do that later.
A
I think, from my perspective, the main priorities are around multi-FS and the geo-replication; those are the two that are top of mind for me, and then it's whatever else is on your list, I think.
B
Well, a lot of this was just moved over based on the Trello cards we had left over. So if you want to start with multi-FS: I think the two of them are both here, and they've been working on the snapshot replication in the manager, and we've got that leftover pad from a couple of CDSes ago, when we were talking about how to do this, the mirroring. Let me get that pulled up.
D
How's that? Yes, yeah. So the snap-schedule stuff is still in a PR. This was blocked for a little while to get some changes into mgr_util to have a CephFS connection pool, and since then I haven't given it too much attention; it's rebased on the mgr_util PR that was merged by now. The PRs here generally work, so the basic functionality is there. There is still some stuff to do around the user interface, because some changes were added based on the RBD scheduled snapshots.
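For reference, a minimal sketch of the snap-schedule interface being discussed, assuming the Pacific-era mgr module command names (paths and periods here are illustrative):

    # Enable the snap_schedule mgr module, then schedule hourly snapshots on the root
    ceph mgr module enable snap_schedule
    ceph fs snap-schedule add / 1h
    # Keep the 24 most recent hourly snapshots (retention spec is illustrative)
    ceph fs snap-schedule retention add / h 24
    ceph fs snap-schedule status /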
B
This would be disaster recovery. So I think we talked about having two file systems that you could mirror to each other. So you might have two sets of active directories that were each mirrored onto a passive backup on the other side, and then, if one of them fails, you can swap over.
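A hedged sketch of that active/passive pairing, using the snapshot-mirroring commands this work grew into; the names fs_a, fs_b, the peer spec, and the path are illustrative:

    # On site A: mirror a directory of fs_a to the passive copy on site B
    ceph fs snapshot mirror enable fs_a
    ceph fs snapshot mirror peer_add fs_a client.mirror_remote@site-b fs_b
    ceph fs snapshot mirror add fs_a /volumes/mygroup/mydir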
B
So you've got a link in there to a PR which is actually closed; I think you can follow that down a breadcrumb trail to the next one. The next couple of them are to fix the rstats on snapshots to make sure they're accurate and up to date, and then we can just look at those and, you know, wrap a shell script or a program around rsync to follow them down and see which specific files we actually need to look at.
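The recursive stats mentioned here are exposed as virtual xattrs on directories, so a wrapper can prune the walk to subtrees whose recursive ctime changed since the last sync; a rough sketch, with the last-sync timestamp kept by the hypothetical wrapper itself:

    # Print directories whose recursive ctime is newer than the previous sync
    LAST_SYNC=1609459200   # hypothetical epoch timestamp of the last completed sync
    find /mnt/cephfs -type d | while read -r dir; do
        rctime=$(getfattr --only-values -n ceph.dir.rctime "$dir" 2>/dev/null | cut -d. -f1)
        [ "${rctime:-0}" -gt "$LAST_SYNC" ] && echo "changed: $dir"
    done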
D
One thing I was wondering about with that mirroring or syncing of snapshots: I don't think we currently have a way in CephFS to, like, sync or create a snapshot that isn't created through a snapshot, if that makes sense, right? And when we sync, you know, a scheduled snapshot, or any snapshot for that matter, we probably want that to again be a snapshot on the remote system, as in immutable. I don't think we have a way of doing that right now, do we?
A
My assumption was that we could just create the snapshot on the source. Sometime later the sync daemon wakes up, it syncs it all to the destination, and then, when it's done syncing, it creates a snapshot with the same name as the source. So the ctimes will probably not match, but the name would be the same; I guess the ctime, or the birth time, whatever, would differ. It would do the whole sync and then create a snapshot of the same name.
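Since a CephFS snapshot is just a mkdir inside the hidden .snap directory, the sequence being described might look like this sketch (mount points and snapshot name are hypothetical):

    # On the source: take the snapshot
    mkdir /mnt/src/data/.snap/snap-20210101
    # The sync daemon copies the snapshot's contents to the destination...
    rsync -a /mnt/src/data/.snap/snap-20210101/ /mnt/dst/data/
    # ...and only then stamps the destination with a same-named snapshot
    mkdir /mnt/dst/data/.snap/snap-20210101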
B
But if you have a user who has allow read/write on a file system, right, they can read and write everything, and we don't have a way to restrict that during the clone. I mean, you don't really need anyone but the daemon to have access to it, so it's possible you could just have special permissions for the daemon and then not allow other accesses until... until.
E
...you're ready. Greg's point is you don't have an avenue for that, and that's true: CephFS doesn't really have a way for us to implement that. I think, you know, we could hide it in some magic directory that's not normally visible; we could create a magic directory that is not normally presented. But I think that would be the only thing we can do in the short term that would be sufficient.
B
I was kind of thinking more of a big-hammer kind of thing, where we just have some mechanism where we don't allow any access except from the daemon, you know, until we're ready to bring anything live. How we'd implement that I'm not sure; I don't think we were going to use file permissions for that, we'd use...
A
Okay. This is, I think, the hairiest, or one of the hairiest parts here: the rstats flush. Is that on John's list right now?
E
You can move things around, right? So you can move something from one subtree to another easily. So the contract so far for scrub has been that everything that existed in the file system and isn't changed gets scrubbed, and so, while it would, I guess, be okay for scrub not to scrub metadata that moves across the subtrees during the operation, I think there's a lot of potential for that to happen and for us to miss a whole bunch of metadata.
C
So once that's done, it probably needs testing, and if anybody else wants to jump in on a review, that would be good. But most of the stuff is there; the pending thing is that we need to write a kind of CLI wrapper over the manager metrics commands to display it in a top-like format. That's what's missing.
A
What's the... and this is... so this is gathering client metrics, so individual client libcephfs instances are sending metrics to the manager; that's how it's actually working, yeah?
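A sketch of that metrics path from the CLI side; the mgr stats module aggregates what the clients send, and a top-like wrapper (what later shipped as cephfs-top) would poll the same query:

    # Enable the stats mgr module that collects per-client libcephfs metrics
    ceph mgr module enable stats
    # Dump the aggregated client performance counters as JSON
    ceph fs perf stats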
J
The libfuse library version... ceph-fuse currently links against an old libfuse, and we can update ceph-fuse to link to the latest libfuse. The newer libfuse has some features, something like splice read and splice write for the I/O, and we can use that to reduce some memory copies in the I/O path. That's... that's basically about it.
B
Someone can pull a PR up... who did the export creation piece? And I think Michael Fritch has a similar one still open; his is for the orchestrator part, reworking some of the way that some of the objects are laid out. So anyway, the plan, as I understand it, is for them to get those cleaned up and merged, and then they're going to do the last little bit to integrate them for the final release.
B
At least the export creation command right now is ceph fs nfs export create; I think it should be... I think we should have a top-level command for the NFS namespace, so ceph nfs whatever-you-want-to-do after that. It's a small thing, but I think it would be good to have it fixed. Yeah, beyond that, I believe it's more or less ready to go. Yeah, we should... we do need to stress, when we hand this off to people, that this is for casual access.
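To make the proposed rename concrete, a hedged before/after sketch (argument order follows the Pacific-era mgr/nfs module; the file system, cluster, and pseudo-path names are illustrative):

    # Current form discussed in the call
    ceph fs nfs export create cephfs myfs mycluster /pseudo
    # Proposed top-level NFS namespace
    ceph nfs export create cephfs myfs mycluster /pseudo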
B
Multiple clustered heads, and it's using all the RADOS grace stuff that's been put together, yeah; it should work. One of the things that I'm having to kind of push back on is that RGW wants to use a lot of Ganesha's caching, but with the CephFS FSAL we really don't want to do that, because we can't respond to cache pressure as easily. So we want to make sure that we disable as much of Ganesha's caching as we can, at least for CephFS.
B
It shouldn't matter; it shouldn't make a big difference. I mean, there are probably some memory savings we can get from stuff like consolidating onto the same server, but, you know, the bulk of it is going to be cached info for the objects themselves that are on the file system, or, you know, in RGW. So you...
A
Okay, good, yeah. I think the next step, after this orchestration and NFS stuff goes in, is to basically do the RGW variant as well. But I don't want to complicate it ahead of time. Do you think, if we changed this to the ceph nfs stuff like I suggested, do you think that it would be the same?
B
Similar, yeah. I mean, we might need to set some global settings in the daemons differently for RGW. But what we can do is just kind of, like, if your first export is an RGW one, you can't create a CephFS one in the same daemon, and vice versa, right?
A
Okay, well, I like this one here: I like the idea of getting rid of the fs part and then just making the export command a little bit FS-specific, so that we can slot in RGW. But if somebody has a better idea, that works too.
A
Okay, alright, I'll throw out my general hope that somebody tackles this for Samba soon after this, even if it's just the case where there's a single Samba instance and no scale-out or HA, just because it's awfully convenient to have that kind of command set.
B
Yeah, I mean, you know, the big problem with SMB, and I'll just throw this out in case people aren't aware, is that Samba forks at every incoming connection. So every incoming connection you have coming in is going to get its own smbd with its own Ceph client, and so, if you have multiple SMB clients that are treading over the same files, they're going to be revoking each other's caps all over the place, and it really kills performance.
B
That point applies to Ganesha's clustering as well: we don't have anything that excludes access for other, non-NFS clients during the grace period. So if the, you know, server goes down, if a node goes down, right, and then it comes back up, then during that reclaim period another CephFS client can race in and grab caps that were held by some NFS client. So we have a little bit of exposure there with consistency, but yeah.
B
Yeah, the problem is that, when this happens, it's really hard to detect; you might never notice, right? Yeah, but, you know, you can get silent data corruption, yeah, and that's kind of nasty. And it can do things like, you know: you think you hold this file lock, but then the thing crashed and came back, and then someone else took that lock and released it while you were gone and changed the file out from under you. You know, stuff like that. It's pretty gross, and it can be really hard to track down.
B
...didn't even know that that happened. I had a conversation about this with somebody, and I think they just release-noted it and tried to prevent people from doing dual-protocol access, but I'm pretty sure there were still support cases that came in as a result, I suppose.
B
But we're almost through this list, with our five minutes left. The other things... these are all just copied off of our Trello board at this point, but we have the batch file create and unlink; Jeff and Sean have been working on that. Yeah, well, we've done asynchronous ones, so we're not batching. Alright, doesn't matter, right.
B
And then the next is the async file create and unlink, which, yes, is almost done; I think that's actually in my most recent integration branch, or maybe it'll be in the next one. The autoscaler plugin for adding MDSes: I don't think anyone has started on that... though, wait, no, I think someone has been working on it.
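For context, the async create/unlink work is a kernel-client feature; a hedged sketch of turning it on, assuming the nowsync mount option the upstream kernel client eventually exposed:

    # Mount CephFS with asynchronous directory operations (async create/unlink)
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,nowsync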
B
And then these are the things that Patrick wanted to do when he gets back, or at least make sure happen for Pacific. Just in the QA suites: we have mechanisms for thrashing the directory fragmentation, to turn it up and down, but they're not widely invoked in the tests. So we wanted to just, like, make that happen randomly across all the work units, and then, I think, merge the suites together now that more of this stuff is considered stable and works.