From YouTube: Ceph Developer Summit Quincy: CephFS Follow-up
A: You'll want to follow along locally. I think everyone should be able to have access to that, because it's a public board. For those of you who are not familiar with it, this is the upstream Trello. I think Sage uses this a lot to try to keep track of what's going on in the project as a whole.
A: And this is also not just an exercise in updating the Trello; we're actually trying to plan out what we're going to work on for the Quincy release.
A: All right, we'll go through all these cards that we have on this CephFS board, or whatever we call these boards, and just give status updates on where we are and whether or not we want to commit to getting each one done in the Quincy release. If we are, then we'll keep the Quincy label; otherwise we'll remove it and it can just stay backlogged.
A: So, let's begin with multi-MDS export thrashing. I believe this is one you've been working on.
C: Yeah, so this PR is ready, Patrick, but one of the tests fails because of an issue with the MDS, for which a patch has been sent. I think that needs to be merged in first; only then will all the tests pass.
A: Next: raise a health warning if MDS daemons are not version-matched. This one was very specific at the time I wrote the feature, but I believe we've recently added broad support for raising a warning if there are mixed versions in the Ceph cluster for any daemon type. I'm not absolutely sure about that.
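For reference, the cluster-wide view of daemon versions is already available from the CLI; `ceph versions` groups running daemons by release, which is the information a warning like this would key off:

```shell
# Show the running release of every daemon in the cluster, grouped by
# daemon type. Two different versions under "mds" is the mixed-version
# situation this health warning would detect.
ceph versions
```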
C: This one I had abandoned because Mark Nelson wasn't happy with the current approach of tracking the RSS. He wanted to go with a priority-caching approach, so I've taken this up again; right now I'm working on a draft PR for that, so I'll probably send another PR based on it.
A: We'll have a separate discussion on that; I'll leave it for now.
A: This one's got an interesting history: it started out as a tracker ticket by Shuon.
A: I don't think we're going to make any headway on this, especially now that Zheng has left the group. So I don't think this will get done for Quincy either, unless something dramatic happens. I think we'll table this for now and keep it in the backlog.
A: So I know we had some changes to ceph-fuse that Zheng had made for lazy I/O purposes, which CERN was very interested in, but I don't think we had any corresponding changes to the kernel client. So I made a tracker ticket, and I'm not even sure what the delta is between the kernel client and the FUSE client, and we're not really testing either one in upstream QA. So it's not really clear.
E: I think we should cancel this thing. I don't see... I mean, putting this in means adding it in [unclear].
A: The feature's not really implemented, right?
E: I don't think this is something we ought to do. I think we should just get rid of lazy I/O, because there's no... you know, you need a...
A: Yeah, when I was talking to Zheng about this, however long ago, the way I was hoping this would be turned on would be some automatic way with a subtree: a directory vxattr that affects the whole subtree's consistency semantics. Zheng didn't like that, for reasons I don't recall, so he added just configs that turn it on for the client across the board, I believe, and that seemed to work well enough for what CERN was doing.
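If memory serves, the client-wide switch that was added is the `client_force_lazyio` option (treat the exact option name as an assumption); a minimal sketch of enabling it for all clients:

```shell
# Assumed option name: client_force_lazyio.
# Forces LAZY_IO on every file a client opens, relaxing CephFS's usual
# cache-coherency guarantees in exchange for better parallel I/O.
ceph config set client client_force_lazyio true
```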
A: I don't think we can remove lazy I/O right now, just because CERN, at least as far as I know, is using it and they like it. So we can't really remove it, at least not right now, and not until we come up with a replacement.
A: All right, let's move on to root squash.
A: No one's working on this right now. I think it's a pretty cool ticket, but it's also definitely non-trivial.
G: So right now there isn't downstream demand for it. Unless there is... you know, obviously it's up to you for upstream, but there isn't downstream demand for this right now.
G: I think that this feature would induce some downstream demand, but it certainly is not on the priority list or the radar from a downstream perspective, although I echo your comment: it'd be super cool.
A: Yeah, so this is a pretty big feature, and I have the beginnings of what it would look like in this tracker ticket. But by no means have I thought about it extremely hard, and there would probably be plenty of reasons to adjust the interface that I'm proposing.
A: Anyway, does anyone want to commit to trying to work on this for Quincy?
A: This would be an excellent excuse to dig into the file layouts and the MDS locker, which I believe you would need to become familiar with in order to actually do this migration in a safe way.
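For anyone digging in, file layouts are already surfaced through virtual extended attributes, so that side of the feature can be explored from a mounted client today, for example:

```shell
# Inspect the layout (data pool, striping parameters) of an existing file:
getfattr -n ceph.file.layout /mnt/cephfs/somefile

# Set the data pool used for new files created under a directory:
setfattr -n ceph.dir.layout.pool -v my_other_pool /mnt/cephfs/somedir
```

Note that the layout of a file that already contains data cannot simply be rewritten in place, which is exactly why a safe migration would need to involve the MDS locker.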
A: You know, as you said, there's no downstream push for this; you hear about it periodically upstream. I don't think it has to be in Quincy, and therefore I'm happy leaving this in the backlog.
G: No, Patrick is still around; we were just having a conversation real quick. I'm just talking about whether we're going to work on this feature or leave it in the backlog, and I for one would love to have it in, but I don't think there's enough priority on it, relative to how much work it is, to commit Red Hat resources to it yet.
A: So this card came about before we were even discussing cephfs-mirror, potentially as a tool to do the things cephfs-mirror is doing by enhancing rsync. But now that we have cephfs-mirror, I'm inclined to close this.
F: Yeah, this could be done a bit better once we have the rstats fixed, which is what Milind is working on. Then rsync could switch to using the rctime for incremental transfers, or rather not only incremental transfers, but also only choosing those subtrees which have any changes anywhere in the tree. So yeah, this is pretty much redundant, I guess.
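The rctime mentioned here is the recursive change time that CephFS already exposes as a virtual xattr on directories; once the rstats are reliable, a tool like rsync could prune unchanged subtrees by comparing it:

```shell
# Recursive ctime: timestamp of the newest change anywhere under this tree.
getfattr -n ceph.dir.rctime /mnt/cephfs/projects

# Other recursive stats that could drive the same kind of pruning:
getfattr -n ceph.dir.rbytes /mnt/cephfs/projects   # recursive total size
getfattr -n ceph.dir.rfiles /mnt/cephfs/projects   # recursive file count
```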
A: Yeah, I think we can archive this one too. At the time we were considering this, PG merging still did not exist, and you also kind of had to have a minimum number of PGs per pool; I think it was like 32 or something, and it had to be fairly high. And if we ever wanted to get to a point where you had dozens or even hundreds of file systems in a cluster, you would have too many PGs being allocated for that.
A: Technically, I don't see any particular issues with doing this. Maybe it's a little tricky mapping the snapshot concept onto files somehow, rather than directories.
A
There's
no
upstream
push
for
this.
There's
no
downstream
push
for
this.
A
This
would
just
potentially
be
a
learning
exercise
with
a
goal
with
a
goal
for
learning
more
about
snapshots
and
mds
which,
as
we
talked
about
before-
and
I
didn't
really
bring
up
in
this
particular
meeting-
a
larger
goal
for
quincy,
I
think,
is
building
our
our
expertise
in
the
mds
and
not
having
such
a
dramatic
push
for
a
number
of
features
that
we've
been
doing
for
the
last
few
years.
E: That's a lot more difficult than it sounds at first. There's no real plumbing in the kernel at the VFS layer for inotify; it's all done in generic code, so there's no operation, no hook, for the file system itself to do notification.
E: Getting the APIs in would probably be a tough thing; I think that this will be harder than it sounds. The code itself is probably not too difficult; I mean, you could just do it with something like caps: bust a bunch of caps and try to hopefully get something after that. But plumbing it in through the VFS is not going to be trivial.
A: We have had some downstream pressure for this in the past. So, Jeff, you think the main barrier here is that we need to have hooks in the VFS for inotify, but that's definitely going to be non-trivial?
A
Is
it
at
all
feasible
to
work
on
this
or
or
what
do
you?
What
do
you
think.
E: I don't know. I mean, I think you'd probably have a tough time getting it past Al Viro first, just to get the VFS work in, and then you have to actually write the code to make Ceph do it, and it's not clear to me what this would look like anyway.
E: I know the CIFS guys had similar plans at one point, and it pretty much never happened, because we were never able to get the VFS APIs, and of course they didn't really have anybody to work on it either, but that's a different matter.
E: We can keep it up here if you want. I don't see myself working on it, or... you know, if somebody else really wants to pick it up and do it, knock yourself out, but I think it'll turn out to be pretty tough to actually get it in.
G: Yeah, I can see how much need there is for this downstream. I haven't heard of it, but that doesn't mean I necessarily know, so I can take an action to check on that.
B: All right, let's move on.
A: Yeah, I don't think... we can just immediately table this; it would require a huge rewrite of the MDS, and we're not doing that right now.
H: Yeah, we've talked about this ticket a couple of times, and we don't need to take this ticket up.
E: So this would just be for libcephfs, right? Or I guess at the MDS layer.
E: Yeah, yeah, because the richacl stuff never got merged, but that doesn't mean we can't use it.
E: Well, I mean, it's useful. Potentially Ganesha could use this, right? You could expose, you know, NFSv4 ACLs, and Samba could do the same, right? And the NFS client in the kernel works with NFSv4 ACLs despite the fact that there's no richacl support in the kernel, right? So I think it's probably still valuable to do it; I was just pointing out that the client-side bits for this in the kernel never got merged.
E: My guess is that they're just trying to avoid soname bumps in the C API, because those are kind of painful. So they were talking about, you know, splitting off libcephfs to be sort of a bit more stable, and then having the underlying C++ API, where API changes could be a little bit more aggressive.
A: ...support. I think this should just select a file system at the start, or maybe, if we get really fancy with the UI, you could bring up a list of file systems, select one, and then it loads up all the client sessions for that file system.
E: Yeah, what I'd like to see is just a command-line option that says, you know, --fs, whatever the name is, right? That would be nice, because you've got...
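As a sketch of the interface being proposed here (the flag does not exist at the time of this discussion, so treat it as hypothetical):

```shell
# Hypothetical: restrict cephfs-top to a single named file system
# instead of mixing clients from every file system in the cluster.
cephfs-top --fs myfs
```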
F: ...someone to work on it. The tricky thing is, I think we need changes in the mgr stats module. I think right now what happens is, if there are clients on another file system, those stats just show up as not available, I think, yeah. So we need the manager stats module to be fixed to handle multiple file systems before we can add the cephfs-top command params to show clients from a particular file system.
A
All
right
leave
this
in
quincy
for
now
and
find
source
work
for
it
later.
A: All right: the mgr volumes module supports snapshots for subvolume groups, but we turned this off because it seemed like we wouldn't be able to have snapshots on parent directories of subvolumes, especially after Zheng's change to add a subvolume flag for directories, so that they can't potentially share snapshots with other subvolumes, and you also can't rename or hard-link a file outside of its subvolume directory. That was to ensure certain guarantees for snapshot scalability.
A
But
I
think
that
you
can
actually
set
a
snapshot
on
a
parent
directory.
I
mean
I've
done
it
and
in
that
it
it
as
long
as-
and
you
know
they-
you
know,
with
the
same
hardline
guarantees,
it
should
work,
but
I
have
not
done
any
performance
tests
for
this
and
there
may
be
a
gotcha.
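Trying this out is straightforward, since CephFS snapshots are taken by creating a directory inside the hidden `.snap` directory at any level of the tree (assuming snapshots are enabled on the file system):

```shell
# Allow new snapshots on the file system (disabled by default on
# older releases):
ceph fs set myfs allow_new_snaps true

# Snapshot a parent directory that contains subvolumes:
mkdir /mnt/cephfs/volumes/.snap/before-upgrade

# Remove the snapshot again:
rmdir /mnt/cephfs/volumes/.snap/before-upgrade
```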
A: All right, moving on: libcephfs SQLite. This is a GSoC project, the complementary library to the new libcephsqlite library. I'll mark this for Quincy because we're probably going to have a GSoC student. There's no guarantee we get a GSoC slot; if we don't, then I'll take this out.
F: Yeah, after that. This is the one on my list: the card which is next in the list after this one in Trello won't take much time, it's mostly just writing tests and fixing any bugs, but after that, this is the one I'll probably be working on.
E: Yeah, I'm still waiting to see if the rework gets in this coming cycle, so we'll just have to see how that goes.
A: All right, last on my list is "client: expose the authoritative MDS for a file/dir". This is a tracker ticket I dug up that I thought was kind of cool. This one is just adding a vxattr in the client; I don't think it needs to talk to the MDS, just to state where the authoritative MDS for a file or directory is.
A: So I think, if you're doing it on a directory, you would probably actually look at the containing directory.
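The interface would presumably mirror the existing CephFS vxattrs; a hypothetical invocation (the attribute name here is invented for illustration, since the feature is only a tracker ticket):

```shell
# Hypothetical vxattr -- not implemented; the name is illustrative only.
# Would report the rank of the authoritative MDS for the inode:
getfattr -n ceph.file.auth_mds /mnt/cephfs/somefile
getfattr -n ceph.dir.auth_mds  /mnt/cephfs/somedir
```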
A: That's it for what's in the Trello. There are some things that are probably on Redmine that aren't on the Trello yet. Is anyone working on something that they think should be tracked on the Trello?
A: I'll dig up the tracker tickets for these. What else are we missing? Okay, this is not set in stone, so if we forgot something, and you remember something you're working on that should be tracked in the Trello, please let me know.
A
Page
uses
this
a
lot
for
doing
a
lot
of
his
talks,
so
it
is
valuable
to
make
sure
that
you
know.
If
you
want,
you
know
we
will.
We
all
want,
make
sure
that
we
publicize
properly
what
changes
we've
been
making
to
to
zfs.
So
it's
important
to
keep
the
trello
updated.