From YouTube: Why libshare? WHY! by George Wilson
From the 2019 OpenZFS Developer Summit
slides: https://drive.google.com/open?id=10tUtaPWZ_WWPO6RofChfzoLylFudRnIa
Those are the unlucky few that have kind of gone into areas that are strange. So, I know the title for this was "libshare is broken on Linux," but I kind of like to say "why libshare?" There's a lot of frustration that we encountered as we started using sharenfs on Linux, and this talk is going to focus on some of those things.
Okay, yeah, lucky you. So our product actually uses sharenfs pretty extensively; we have it as the mechanism to export our NFS file systems out to the environments where we're serving up data. So when we started looking at doing the transition from illumos to Linux, one of the things that we encountered was: well, what are we going to do with existing customers that have the sharenfs property set? It was natural for us to just simply say we're going to carry this over, we have to deal with it, and hopefully it all just works when we get to Linux. As we started digging into it, we found that, well, it doesn't all just work. There are some really interesting things about the libshare implementation.
There are things in the illumos implementation that we kind of took for granted until we started looking at it from Linux. For example, it's got built-in locking. Our product has hundreds of threads that are simply all going out and trying to share different file systems at one time. libshare on illumos just handles that; on Linux it totally falls apart. It's also very tightly coupled with NFS, which had some really interesting consequences, and I'll go into why that was critical for us and what we're doing to deal with it.
So on our journey into using sharenfs we encountered three main problems. The first, which I alluded to, was concurrency issues. We would lose shares that should have been exported, because threads were all trying to update the sharetab file, which lives in /etc/dfs. That file is a carryover from the illumos implementation; on Linux nothing really uses it, other than the code itself trying to use it.
So I think part of the problem here is that we have this extra file that really isn't necessary in the Linux implementation. On FreeBSD, my guess is it's probably not used either. Does anybody here know, do they use the sharetab? Anyway, that was one of the things that we encountered; we actually found it to be a real problem for us.
The simple solution was: okay, oops, let's tack some file locking in here. It's kind of a coarse, big-hammer lock, but it serializes access to the file so that it stays up to date and doesn't go stale.
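The file-locking fix can be sketched with flock(1). A minimal sketch, assuming a sharetab-style append from many concurrent sharers; the paths and line format are illustrative, not the product's actual code.

```shell
# Illustrative sketch of serializing sharetab updates with flock(1) so
# concurrent `zfs share` calls can't clobber each other. The real code
# would lock /etc/dfs/sharetab; a temp file stands in for it here.
sharetab=$(mktemp)

add_share() {
  # Hold a coarse whole-file lock for the duration of each append.
  flock "$sharetab" sh -c \
    'printf "%s\t-\tnfs\trw\n" "$1" >> "$2"' _ "$1" "$sharetab"
}

# Hundreds of sharing threads, modeled here as background jobs.
for fs in /tank/a /tank/b /tank/c; do
  add_share "$fs" &
done
wait
```

With the lock in place, every entry survives no matter how the jobs interleave.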
We then started finding scalability issues. If you had a lot of file systems, we quickly found that the Linux implementation of libshare would constantly be reading /proc/self/mounts over and over and over, so much so that just doing a simple share of a file system could take minutes while doing nothing but reading that file on a repeated basis. What we encountered was that part of the implementation there is not leveraging some of the built-in caching that was put into the sharenfs code path.
B
If
you're
using
this,
you
may
not
necessarily
encounter
stale
file
handles,
but
there
are
certain
circumstances
where
you
can
see
this,
especially
if
you
have
a
lot
of
file
systems
on
Linux.
The
first
case
where
we
encountered
this
was
just
simply
rebooting
the
NFS
server.
If you rebooted the NFS server, you could get into situations where all of a sudden your client would try to do a mount and get a stale file handle, and then shortly later, if you tried to do the mount again, it would succeed. So it's a very intermittent problem, and I'll go into exactly why it happens. First, let's talk a little bit about the way we actually coordinate between NFS and CIFS shares on Linux.
So on Linux we're using systemd, and the server setup kind of looks like this. We have nfs-mountd... or sorry, the first one should be nfs-config, then nfs-mountd, which runs rpc.mountd; all of those are dependencies of nfs-server, which is where nfsd is actually run. And then zfs-share depends on, and runs after, nfs-server.
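That ordering can be sketched as a systemd unit fragment; these directives are illustrative of the stock arrangement, not the exact shipped unit files.

```ini
# zfs-share.service: ordering-related directives only (illustrative,
# not the exact shipped unit)
[Unit]
Description=ZFS file system shares
# share only once the NFS server (and thus mountd) is up
After=nfs-server.service
Requires=nfs-server.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/zfs share -a
```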
So the question is: is this the same story for NFSv4 and NFSv3? Yes, the same thing will happen. Now, one of the questions that I have for those in the FreeBSD world is: is there a similar gap on that platform? I don't know; I haven't done enough investigation, but it would be very interesting to find out, and we'll go into a little bit more detail.
So the first thing we thought of was: okay, let's just reorder this. We'll take the zfs-share service and make it run first; we'll put the share step at the beginning. That way, when the system starts, we run them in this order: we'll share everything out, then we'll start nfs-config, mountd, and so forth.
B
But
the
problem
here
is
that
see
if
I
shared
an
X
for
FS
I
and
the
NFS
server
on
Linux
does
an
export,
SS
R,
which
doesn't
look
at
anything,
that's
not
in
the
NT
X
plus
5.
So
all
these
exports
that
are
done
simply
are
ignored
by
the
time
you
actually
start
the
NFS
server,
so
this
was
kind
of
like
okay,
so
it's
like,
okay,
even
just
simply
reordering,
doesn't
necessarily
work.
We
have
to
do
something
different
here.
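The semantics at play can be shown with a toy model: plain files standing in for the real exportfs state, not the real tool.

```shell
# Toy model of why reordering fails: `exportfs -i` adds an export directly
# to the live export table, but `exportfs -r` rebuilds that table strictly
# from /etc/exports (and exports.d), dropping anything added individually.
etc_exports=$(mktemp)   # stands in for /etc/exports
live_table=$(mktemp)    # stands in for the kernel's export table

printf '/tank/a\n' > "$etc_exports"

# "exportfs -i": ZFS shares a dataset straight into the live table.
printf '/tank/b\n' >> "$live_table"

# "exportfs -r" at NFS server start: resync the live table from the file;
# /tank/b was never written to /etc/exports, so it vanishes.
cp "$etc_exports" "$live_table"
```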
So, as we kind of thought about this, we started to build a set of requirements before we actually moved to our solution. What we found was that we need the sharenfs logic to actually run before mountd and nfsd start; that was one of the things we needed in order to handle this particular case. And we need to somehow tie them closely together, so they work in unison with each other, which was something that was really convenient in the illumos libshare case.
But when we start looking outside of illumos, libshare really is kind of a hindrance for most platforms, because not everybody has the same luxury of having that close tie. So whatever solution we end up with, we want it to work across multiple platforms that don't have this close tie between ZFS and NFS. Our solution was: we'll utilize the /etc/exports mechanism that already exists. That way we don't have to change the NFS service; we know that the NFS service always looks there.
So maybe we can leverage this, because it's automatically consumed by the NFS service when it starts. And then we thought: well, okay, what if, instead of doing a `zfs share -a`, which shares out each filesystem one by one, we generate the file up front? We already know how to convert these sharenfs properties into something that Linux can interpret for exportfs.
I'll talk about some of the performance aspects of this compared to `zfs share -a`. At the same time we said: okay, let's also introduce some kind of cleanup service at the end that can deal with this. So we effectively have a start point that does some work, and an end point after NFS has finished starting up. It kind of looks like this: again we have this dependency graph, with the share service in the chain.
B
Now
is
running
a
ZFS
shared
FG
to
generate
something
that
NFS
service
can
run
and
some
cleanup
service
at
the
end.
That
simply
is
going
to
remove
it
and
what
we
chose
to
do
use
is
in
Linux.
There
is
the
concept
of
having
an
export
study
directory
where
you
can
specify
and
put
files
in
there
that
get
consumed
by
the
NFS
server.
So
we
have
a
ZFS
dot
exports
file,
it
gets
generated
by
the
ZFS
shared
SJ
and
then
just
simply
gets
removed
after
it's
been
consumed
by
NFS
server.
Is
that
okay?
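The generator idea can be sketched like this: turn sharenfs properties into an exports(5)-style file under /etc/exports.d. The `zfs` command is mocked so the sketch runs anywhere; the real service would call the actual zfs(8) binary, and the real option translation is considerably more involved.

```shell
# Hedged sketch: emit one exports(5) line per dataset whose sharenfs
# property is set, into a file the NFS server will consume at startup.
zfs() {  # mock of `zfs list -H -o mountpoint,sharenfs`
  printf '/tank/a\trw\n'
  printf '/tank/b\toff\n'
  printf '/tank/c\trw,no_root_squash\n'
}

exports_file=$(mktemp)   # stands in for /etc/exports.d/zfs.exports
tab=$(printf '\t')

zfs list -H -o mountpoint,sharenfs | while IFS=$tab read -r mnt opts; do
  [ "$opts" = "off" ] && continue          # dataset is not shared
  printf '%s *(%s)\n' "$mnt" "$opts" >> "$exports_file"
done
```

Only the shared datasets (/tank/a and /tank/c here) end up in the file.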
So we were still seeing these cases where you could have a restart of the NFS server; this is a slightly different case. You restarted the systemd service, the file got generated, and then, at the same time, your application is simply trying to share a new file system. If it tries to share the new file system somewhere in the middle of that, then by the time the client tried to mount it, it would still fail.
So we kind of went back to the drawing board and said: okay, we need to figure out how to close this gap. To do that, we said: well, we have this endpoint service, which we're calling a cleanup service; why not add more logic there? We know that at the very beginning we're generating all the exports that existed at the time we restarted the service.
If at the end we simply regenerate what's there now, we can compare the two and make a determination whether there are new file systems that need to be added, and then we'll issue an `exportfs -a` to cover all the use cases: not only what's in /etc/exports, but any exports that may have been done manually without adding them to the file. And that's kind of where we ended up. There are still issues here; this has gotten us 98% of the way there, but it isn't totally complete.
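The compare step in the cleanup service can be sketched as follows. The file names and the re-export hook are illustrative; the real service would finish with `exportfs -a`.

```shell
# Sketch of the cleanup pass: compare the export list generated when the
# NFS server started against a freshly regenerated one. Anything shared
# in between shows up only in the new list.
at_start=$(mktemp)
at_end=$(mktemp)

printf '/tank/a\n/tank/b\n' > "$at_start"            # generated at startup
printf '/tank/a\n/tank/b\n/tank/new\n' > "$at_end"   # regenerated at cleanup

# Lines only in the regenerated list (inputs must be sorted for comm).
new_shares=$(comm -13 "$at_start" "$at_end")

if [ -n "$new_shares" ]; then
  # The real cleanup would run `exportfs -a` here to pick up stragglers.
  echo "re-exporting: $new_shares"
fi
```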
And so one of the things that I'm really interested in here is how we deal with this when we look at other platforms like FreeBSD, macOS, and Windows, where we are just another file system and have to deal with the dependencies of NFS, and in the future with SMB. How do we deal with that to make sure that we're playing nicely with them, or they're playing nicely with us?
So this is what it kind of looks like, completing the chain: we added more logic, and that's cleanup. So really you can think of this as an initialization of the sharing, some NFS work, and then some cleanup work, or fini, that actually happens at the end. That's the model we're looking at to try to close the gap, but again, we're interested to see how that plays with other file systems and what other people have done.
If we look at what libshare did for illumos, which is actually pretty nice, it's not possible to accomplish the same thing on other platforms; we just don't have the same luxury. So with that in mind, we're trying to think about what libshare really should do. It might be a very simple kind of approach: you call into libshare, each platform defines its platform-specific components, and it knows how to do things like export a file system, however you choose to do that.
B
It
knows
how
to
find
if
file
system
has
actually
been
exported.
However,
you
you
know
you
choose
to
do
that.
It
knows
how
to
talk
to
the
you
know:
system
services,
whether
it's
talking
directly
to
mount
D
or
killing
mount
D
or
restarting
mount
D,
whatever
it
needs
to
do
it
kind
of
gets
all
handled
in
Lib
share,
but
the
way
it's
implemented
today,
I
think
we
have,
as
a
community,
tried
too
hard
to
make
live,
share,
look
like
a
Lumos
live
share
and
we
need
to
now
say:
okay.
What illumos has is really the exception. What we need for the other platforms is what's going to define the norm and how we implement these going forward, if we want sharenfs properties to work seamlessly across platforms. So that's going to be the thing we're focusing on. For Linux specifically, we're going to be using the exports.d directory directly.
B
We
have
some
ideas
here,
based
on
the
work
that
we've
already
done
and
we're
kind
of
kicking
the
SMB
problem
down
the
road,
because
one
of
the
things
that
live
share
did
provide
the
most
was
a
common
mechanism
to
share
not
only
NFS
but
SMB,
and
we
don't
want
to
solve
that
problem.
Just
yet
so
so,
first
phase
will
really
be
focusing
on
NFS
directly.
Yeah, so one question was: why are we removing the exports file as a cleanup phase? The main reason that we do it in this implementation is because that file isn't getting updated in place when other shares come in. We didn't want our support organization, or anybody else looking at the system, to look at this and say: oh, the reason it's not exported is because it's not in the exports.d file. Well, that's true, because that file gets generated, consumed, and then goes stale. So that's why we remove it.
B
Yes,
in
the
future.
That
plan
is
that
file
will
stay
and
it
will
get
updated
in
place.
It
will
have
all
the
information
so
effectively
for
Linux
it's
taking
the
place
of
what
the
share
tab
was
doing
really
will
now
live
in
an
exports
file,
and
we
have
you
know:
we've
looked
at
things
like
to
avoid
locking
you
know,
because
we
have
this
directory.
What,
if,
like
every
file
system
that
you
share,
has
its
own
file?
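That per-dataset-file idea might look something like this; the directory and naming scheme are hypothetical.

```shell
# Sketch of one-exports-file-per-dataset: concurrent shares each write
# their own file under an exports.d-style directory, so no file is ever
# shared between writers and no locking is needed.
exports_d=$(mktemp -d)   # stands in for /etc/exports.d

share_one() {
  # dataset name with '/' mapped to '-' yields a unique file name
  fname=$(printf '%s' "$1" | tr '/' '-')
  printf '%s *(rw)\n' "$2" > "$exports_d/$fname.exports"
}

share_one tank/a /tank/a &
share_one tank/b /tank/b &
wait
```

Since each writer owns its file outright, the concurrency problem from the single shared sharetab simply goes away.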