Description
WebNative File System (WNFS) - presented by @expede, @matheus23 at IPFS Thing 2022 - IPFS Implementations - https://2022.ipfs-thing.io
GitHub organization for WNFS: https://github.com/wnfs-wg/
WNFS specification repository: https://github.com/wnfs-wg/spec/
@expede and @matheus23 work at Fission:
https://fission.codes
https://github.com/fission-codes/
A: Thanks, everyone, for joining. I'm going to be talking about the WebNative File System, or, as we sometimes like to say, a sort of "Unix FS++", or "beyond Unix FS". That means this talk is going to be a little bit different from the last couple, which were more about full nodes or how to get things moved around; this one is specifically about the file portion.
B: And hi, I'm Philipp. I'm a protocol engineer at Fission, and my handle is matheus23.
A: We only have 15 minutes, so we won't be able to talk about everything in depth, but please do come talk to us afterwards. We've learned a lot along the way building this, and even if you don't end up using all of WebNative, feel free to steal as many of these ideas as you like.
A: In addition, that requires all the stuff below it: access control, being extensible and collaborative, working in hostile places like browsers, all of this stuff. And ideally we also want users to control their own data and be able to use data across applications if they want to. So yeah, you can pull up a file explorer.
A: If you're building an application and have written, say, a JSON file, you should also be able to explore that file separately inside a file explorer. Essentially, just like you'd expect things to work on Windows or macOS, you should get the same experience in the browser, or really anywhere this runs. Today we have an implementation for the browser.
A: We are rewriting it in Rust and WebAssembly, so that it'll run absolutely everywhere. At a very high level: at the top, of course, you have some sort of mutable pointer, an IPNS or DNSLink, and nested under that, public files, secret files, and (we won't talk about these because of time) a sharing inbox and a sharing outbox. The public file system encapsulates the basic data model this all works with: we have regular IPLD nodes, but also extended file nodes that have raw data and then metadata.
A: That metadata is fully extensible. You can write arbitrary data into it, just like you can on any modern file system, and it's broken into two components, userland and kernel space: essentially, what the system manages for you versus what users can actually write into. The user side holds arbitrary tags, MIME types, sources, commit messages, kind of whatever you'd like. And then, of course, there are directories, which can nest more data on top of that.
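The userland/kernel metadata split described above can be sketched roughly like this (a minimal illustration in Python; the field names and the version string are made up for the example, not taken from the WNFS spec):

```python
from dataclasses import dataclass, field

@dataclass
class Metadata:
    # Managed by the file system itself ("kernel" space).
    system: dict = field(default_factory=lambda: {"version": "1.0.0"})
    # Freely writable by applications ("userland"): tags, MIME types,
    # sources, commit messages, whatever you'd like.
    userland: dict = field(default_factory=dict)

@dataclass
class FileNode:
    content: bytes
    metadata: Metadata = field(default_factory=Metadata)

note = FileNode(content=b'{"hello": "world"}')
note.metadata.userland["mime_type"] = "application/json"
note.metadata.userland["tags"] = ["example", "demo"]
```

The point is only that the system-managed section and the application-writable section live side by side on every node.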
A: Hard and soft links. Content addresses, obviously, get us this new kind of link that we haven't seen before on the web. You can think of a content address as a hard link from traditional file systems: it says "this is exactly this file", and if you have the same file multiple times, you get deduplication and all the stuff we love from IPFS. But we found we also needed support for soft links, or symlinks, and those really behave like a URL.
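The two link flavors can be illustrated with a toy content-addressed store (a sketch: `cid` here is just a SHA-256 hex digest standing in for a real IPFS CID, and the namespace dict stands in for a mutable pointer):

```python
import hashlib

def cid(data: bytes) -> str:
    # Stand-in for a real IPFS CID: a plain SHA-256 hex digest.
    return hashlib.sha256(data).hexdigest()

store = {}  # content-addressed block store

def put(data: bytes) -> str:
    h = cid(data)
    store[h] = data
    return h

# Hard link: identical content deduplicates to a single stored block.
a = put(b"the same avatar bytes")
b = put(b"the same avatar bytes")

# Soft link: a path resolved at read time against a mutable namespace,
# much like following a URL; the target can change underneath it.
namespace = {"/photos/avatar.png": a}

def resolve(path: str) -> bytes:
    return store[namespace[path]]
```

A hard link always yields the same bytes; the soft link yields whatever the namespace currently points at.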
A: We also found, for a bunch of reasons, including CRDTs, which I'll get to in a later slide, that we wanted versioning. So, by default, nothing gets deleted: you only overwrite files, and previous versions stick around, so you can really think of this like git.
A: So, over time: here's one layout of one directory, and then we're going to add a file, this headshot. All that we do is add a new version of the photos directory and a new version of the avatars directory, which then points at the old caricature and at this new headshot.
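That update pattern is classic path copying in a persistent tree, which can be sketched like so (a toy store keyed by JSON hashes, not the real IPLD encoding):

```python
import hashlib
import json

store = {}  # immutable, content-addressed node store

def put(node: dict) -> str:
    data = json.dumps(node, sort_keys=True)
    h = hashlib.sha256(data.encode()).hexdigest()
    store[h] = node
    return h

# Version 1: root/ -> photos/ -> avatars/ -> caricature
caricature = put({"type": "file", "content": "caricature bytes"})
avatars_v1 = put({"caricature": caricature})
photos_v1  = put({"avatars": avatars_v1})
root_v1    = put({"photos": photos_v1})

# Add a headshot: write new versions of avatars/, photos/, and the root,
# pointing at the old caricature hash and the new headshot. Nothing from
# version 1 is deleted, so both roots stay readable, much like git commits.
headshot   = put({"type": "file", "content": "headshot bytes"})
avatars_v2 = put({"caricature": caricature, "headshot": headshot})
photos_v2  = put({"avatars": avatars_v2})
root_v2    = put({"photos": photos_v2})
```

Only the directories on the changed path get new versions; unchanged siblings are shared by hash.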
A: This part's not implemented yet, but it's on the roadmap: tracking all of the events involved in that as well, so that if you need to show a user "hey, here are the changes since last time", that's all tracked automatically too.
A: Now, this layout ends up looking like a stream over time, but really, underneath, it's still just a DAG rooted at a single root. It's just harder to think about it in this layout than in the other one.
A: Having the ability to have multiple concurrent writers at the same time is really, really difficult without building a bunch of stuff on top of IPLD. So if somebody is writing A and then B, and somebody else comes along and writes C and then a bunch of changes underneath that concurrently, we need some way to reconcile these automatically, and we've made this work automatically for file- or directory-level changes.
A: If somebody makes concurrent changes inside a file, well, we don't know what's inside it, so that needs a plug-in from the developer to say "oh, this is a Photoshop document (or whatever), and here's how you do that reconciliation". Because nothing ever gets deleted, we keep all of this history, and if the automatic reconciliation mechanism failed for some reason, or picked the wrong version, you can always go back and say "well, actually, I wanted this other one".
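A minimal sketch of that reconciliation rule, assuming a directory is just a name-to-content map (the deterministic tie-break and the history list are illustrative stand-ins for what the talk describes, not the spec's algorithm):

```python
def merge_dirs(base, ours, theirs, file_merger=None):
    """Reconcile two concurrent versions of a directory (name -> content).

    Directory-level changes merge automatically. A file changed on both
    sides is a real conflict: it needs a developer-supplied merger that
    understands the format; otherwise we pick one deterministically and
    keep the loser around, since nothing ever gets deleted.
    """
    merged, kept_in_history = {}, []
    for name in set(base) | set(ours) | set(theirs):
        o, t = ours.get(name), theirs.get(name)
        if o == t:
            if o is not None:
                merged[name] = o
        elif t is None or t == base.get(name):
            if o is not None:
                merged[name] = o  # only "ours" changed this entry
        elif o is None or o == base.get(name):
            merged[name] = t      # only "theirs" changed this entry
        elif file_merger is not None:
            merged[name] = file_merger(o, t)
        else:
            winner, loser = sorted([o, t])  # deterministic pick
            merged[name] = winner
            kept_in_history.append((name, loser))
    return merged, kept_in_history

# Non-conflicting concurrent edits merge automatically.
base   = {"readme.md": "v1"}
ours   = {"readme.md": "v1", "ours.txt": "new"}
theirs = {"readme.md": "v2"}
merged, history = merge_dirs(base, ours, theirs)
```

With a true file-level conflict, `merge_dirs` falls into the last branch, where a real application would supply a format-aware `file_merger`.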
A: And the big, big reason why we started doing this work was secret files. Having encryption out of the box is by far the number one reason people start looking for tools like this. It works fairly simply: we use AES keys to encrypt data, and every file and every directory gets its own key. This is based on an idea called Cryptree, which you'll hear about from a few of the presentations this afternoon.
A: So if you can decrypt the root of the file system, you can decrypt everything, or just subdirectories, and so on. And you get this nice isomorphism where, if you have some unencrypted data, you can encrypt it, and vice versa.
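The Cryptree idea can be sketched like this. Note the "cipher" here is a toy SHA-256 keystream standing in for the AES encryption the talk mentions, and the `photo:`/`docs:` plaintext layout is invented for the example:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (SHA-256 keystream) standing in for real AES;
    # XOR is its own inverse, so this both encrypts and decrypts.
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Every node gets its own key; a directory's plaintext carries its
# children's keys, so one key unlocks exactly one subtree.
root_key, docs_key, photo_key = (os.urandom(32) for _ in range(3))
photo_ct = keystream_xor(photo_key, b"raw image bytes")
docs_ct  = keystream_xor(docs_key, b"photo:" + photo_key.hex().encode())
root_ct  = keystream_xor(root_key, b"docs:" + docs_key.hex().encode())

# Holding docs_key reveals photo_key (and hence the photo), but says
# nothing about the root key or any sibling subtree.
docs_pt = keystream_xor(docs_key, docs_ct)
recovered_photo_key = bytes.fromhex(docs_pt.split(b":")[1].decode())
```

Decrypting a node yields the keys of its children, which is exactly the "root key unlocks everything, subtree key unlocks only that subtree" property.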
B: Right, and what Brooke just talked about enables us to do a bunch of cool use cases. For example, let's say you're using an application; you may not 100% trust it with all of your private files. So what you want to do is share only a specific section of your private file system.
B: We can do this with these Cryptrees by just sharing the key for a certain directory, let's say. And as you can see here, if you have this key, you can read all of the stuff below it, because you can just unlock this node, see the next keys, and continue from there. But if you share, let's say, a key to just a single file, very deep in your directory tree, then there's no way for someone holding that key to read the rest of the file system.
B: We do this kind of thing in the time dimension too, not only in the hierarchy. So essentially, what we're doing is we have a deterministic way of deriving new keys across time for every file and directory, and so every time you write a new version of a directory, that new version gets a new key.
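A minimal sketch of deterministic key derivation across time, assuming (for illustration only) that each version's key is simply the hash of the previous one:

```python
import hashlib

def next_key(key: bytes) -> bytes:
    # Derive the key for the next version by hashing the current one.
    # Hashing is one-way: someone holding the key for version n can
    # follow along to versions n+1, n+2, ..., but cannot recover the
    # key for version n-1.
    return hashlib.sha256(key).digest()

k0 = hashlib.sha256(b"initial seed").digest()
k1 = next_key(k0)
k2 = next_key(k1)
```

Because the derivation is deterministic, two devices that share a key at some version will independently derive identical keys for later versions.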
B: So this whole way of structuring your file system, and sharing just sub-parts of it with, let's say, an app, poses a problem: what if you want to update the root of your file system, but you can't actually read it or write to it? So what you do is you have some deterministic way of addressing new versions of every file and directory, and you just write as far up as you can and re-root your file system from there.
B: One other problem we were addressing: when you have this kind of across-time ratcheting of the keys in a directory or file, you often end up in a situation where you went offline for a while on some other node. Say you worked on your laptop for a while, and then you open up your phone, and you want to read the new version of your file system.
B: Then you'd need to fast-forward a bunch of times until you finally arrive at the new version of your file system, which is not ideal. And so Brooke invented something that we're calling a skip ratchet, which is essentially a way of deriving new keys such that getting to the most recent version of your key is an O(log n) operation instead of an O(n) operation.
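A much-simplified two-level sketch of the idea. The real skip ratchet in the WNFS spec nests more levels and achieves the bounds mentioned above; this version only shows why skipping whole epochs with one hash beats stepping one hash at a time:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

EPOCH = 256  # small steps per medium epoch (illustrative)

class SkipRatchet:
    """Two-level sketch: seeking far ahead costs roughly one hash per
    whole epoch skipped plus one per remaining small step, instead of
    one hash per step."""

    def __init__(self, seed: bytes):
        self.medium = H(seed + b"medium")
        self.small = H(self.medium + b"small")
        self.count = 0

    def key(self) -> bytes:
        return H(self.small + b"key")

    def inc(self):
        # One small step; at an epoch boundary, roll the medium seed
        # and restart the small chain from it.
        self.count += 1
        if self.count % EPOCH == 0:
            self.medium = H(self.medium)
            self.small = H(self.medium + b"small")
        else:
            self.small = H(self.small)

    def seek(self, target: int):
        # Fast-forward (never rewind) to an absolute step count.
        assert target >= self.count
        epochs_ahead = target // EPOCH - self.count // EPOCH
        for _ in range(epochs_ahead):
            self.medium = H(self.medium)  # one hash skips a whole epoch
        if epochs_ahead:
            self.small = H(self.medium + b"small")
            offset = 0
        else:
            offset = self.count % EPOCH
        for _ in range(target % EPOCH - offset):
            self.small = H(self.small)
        self.count = target
```

Stepping 1000 times and seeking straight to step 1000 land on the same key, but the seek performs only a handful of hashes.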
B: If you want to know more about that, ask Brooke; we have a paper. One more thing: I've just talked about read access all the time now, but we also want to have write access. So we want to support the use case of an operator, let's say Fission, who's storing data on behalf of users, where the users then come to it to write.
B: That is based on Bloom filters. We're adding some random garbage into them, which we call saturating the Bloom filters, so that you can't actually distinguish lots of Bloom filters from each other in the implementation, and that obscures more metadata.
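A sketch of such saturation, with made-up parameters rather than the spec's actual filter sizes:

```python
import hashlib
import secrets

SIZE, HASHES, SATURATION = 2048, 30, 320  # illustrative, not spec values

def bit_indices(item: bytes) -> set:
    # Derive HASHES distinct bit positions for an item by iterated hashing.
    out, i = set(), 0
    while len(out) < HASHES:
        digest = hashlib.sha256(item + i.to_bytes(4, "big")).digest()
        out.add(int.from_bytes(digest, "big") % SIZE)
        i += 1
    return out

def saturated_filter(items) -> set:
    bits = set()
    for item in items:
        bits |= bit_indices(item)
    # Saturate: flip extra random bits until a fixed popcount is reached,
    # so an observer can't tell how many entries a filter really holds,
    # or correlate filters by their density.
    while len(bits) < SATURATION:
        bits.add(secrets.randbelow(SIZE))
    return bits

f = saturated_filter([b"/private/docs", b"/private/docs/note.txt"])
```

Membership is still checked the usual way (all of an item's bits must be set); only the filter's overall shape is disguised.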
B: Another thing we care about in terms of obscuring metadata: we don't actually want to expose the DAG structure, or the file system structure, publicly. So when you look at a WNFS from the public side, and you browse it in a gateway or something, you shouldn't see how deep the hierarchy in someone's file system is, or how big the files are, whether they need to be split up, etc. And so here's what we do with all of these links that exist between files.
B: In reality, we blow them away, and instead we put all of the encrypted nodes into some roughly balanced data structure; for our intents and purposes, just say a HAMT. And when someone actually goes into this HAMT and starts decrypting nodes, they can reconstruct all of the links between files and directories.
B: The structure we're using is just a HAMT, based on an existing HAMT implementation, with degree 16. We've found that works fairly well and has a couple of nice properties: we can efficiently compare versions, and we can have small diffs for updates.
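A minimal HAMT with degree 16 might look like this (an illustrative sketch, not the implementation the talk refers to): each level consumes one hex nibble of the key's hash to pick one of 16 child slots.

```python
import hashlib

DEGREE = 16  # one hex nibble of the hash per level

def nibble(key_hash: str, depth: int) -> int:
    return int(key_hash[depth], 16)

class Hamt:
    """Minimal hash array mapped trie: each node has up to 16 slots,
    indexed by successive nibbles of the key's hash; colliding leaves
    get pushed one level down."""

    def __init__(self):
        self.slots = [None] * DEGREE

    @staticmethod
    def _hash(key: bytes) -> str:
        return hashlib.sha256(key).hexdigest()

    def insert(self, key, value, depth=0, key_hash=None):
        key_hash = key_hash or self._hash(key)
        i = nibble(key_hash, depth)
        slot = self.slots[i]
        if slot is None:
            self.slots[i] = (key, key_hash, value)
        elif isinstance(slot, Hamt):
            slot.insert(key, value, depth + 1, key_hash)
        else:
            old_key, old_hash, _ = slot
            if old_key == key:
                self.slots[i] = (key, key_hash, value)  # overwrite
                return
            # Collision on this nibble: push the existing leaf down.
            child = Hamt()
            child.slots[nibble(old_hash, depth + 1)] = slot
            self.slots[i] = child
            child.insert(key, value, depth + 1, key_hash)

    def get(self, key, depth=0, key_hash=None):
        key_hash = key_hash or self._hash(key)
        slot = self.slots[nibble(key_hash, depth)]
        if isinstance(slot, Hamt):
            return slot.get(key, depth + 1, key_hash)
        if slot and slot[0] == key:
            return slot[2]
        return None
```

Because placement depends only on the key's hash, two parties with the same entries build the same trie, which is what makes cheap structural comparison and small diffs possible.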
A: Awesome, yeah, so that's it. There are a couple of links on here. We've been doing the majority of the spec work and the development over the last couple of years, but a few other teams have started to pick this up as well, so we've extracted it out into its own working group. That's on GitHub under the WebNative File System working group, github.com/wnfs-wg (a lot of characters, I realize), and if you want just the spec, then it's /spec.
A: And for the people watching this in the future: I'll be giving a longer, more in-depth presentation about this at Strange Loop this year, so hopefully, by the time you're watching this, that will be up as well. Great, thanks.