From YouTube: Feather - A File-oriented IPFS Light Client - Jorropo
Description
Explain how we store files on IPFS and how this can be used to make IPFS light clients.
Hi, I'm Jorropo. I work for Protocol Labs, and today I want to talk to you about light clients, UnixFS, and other stuff.
So the first talking point is: what's a phone? It might seem like a simple question, but many, many people use phones.
Here is a stat I found online. Because we're building the new decentralized web, we want people that are using phones to be able to access the web in a decentralized manner. However, this poses a technical challenge for us, because phones have very little RAM and not much CPU.
The network is expensive too, because many phone plans have data caps in place, and you don't want to drain the battery. This is not very compatible with the main implementation I am working on, which is Kubo. It uses lots of RAM, it leans on the CPU a lot, its network use continues in the background, and again, the battery will drain fast. So you probably don't want to use Kubo on your phone.
So maybe a thing we can take a look at is: what's UnixFS? This is the data format we use to store files on IPFS. Maybe that will give us some insight into why Kubo uses so many resources.
The first point, which is not actually UnixFS: what is a CID? It's the key we use to link to files in IPFS, and it has multiple fields. The multibase prefix tells you how to decode the string into binary. Then, once it's in binary, you can parse it to get the other parts of the CID, starting with the version, which is always one (kind of).
The codec is an important field; we'll see why later. The hash field gives us the algorithm and the digest, which is the result of the hash, and the hash is what lets us do content integrity: it is the hash of the block this CID is pointing to. The simplest codec in use in UnixFS is raw, which is literally just some bytes.
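The CID layout just described (a multibase prefix, then version, codec, and the multihash's algorithm, length, and digest) can be sketched in a few lines of Go. This is a minimal illustration under stated assumptions (base32 multibase, sha2-256, CIDv1 only), not a real CID library:

```go
package main

import (
	"crypto/sha256"
	"encoding/base32"
	"encoding/binary"
	"fmt"
)

// Lower-case base32 without padding, as used by the "b" multibase prefix.
var b32 = base32.NewEncoding("abcdefghijklmnopqrstuvwxyz234567").WithPadding(base32.NoPadding)

// makeCIDv1 builds a CIDv1 string for some bytes: version, codec,
// then a multihash (hash algorithm code, digest length, digest).
func makeCIDv1(codec uint64, data []byte) string {
	digest := sha256.Sum256(data)
	buf := binary.AppendUvarint(nil, 1)    // CID version 1
	buf = binary.AppendUvarint(buf, codec) // codec, e.g. 0x55 = raw
	buf = binary.AppendUvarint(buf, 0x12)  // multihash code: sha2-256
	buf = binary.AppendUvarint(buf, 32)    // digest length
	buf = append(buf, digest[:]...)
	return "b" + b32.EncodeToString(buf) // "b" = multibase base32
}

// parseCIDv1 reverses the process: strip the multibase prefix, decode
// the string into binary, then read version, codec, and the multihash.
func parseCIDv1(cid string) (version, codec, hashAlg uint64, digest []byte, err error) {
	if len(cid) == 0 || cid[0] != 'b' {
		return 0, 0, 0, nil, fmt.Errorf("unsupported multibase prefix")
	}
	raw, err := b32.DecodeString(cid[1:])
	if err != nil {
		return 0, 0, 0, nil, err
	}
	version, n := binary.Uvarint(raw)
	raw = raw[n:]
	codec, n = binary.Uvarint(raw)
	raw = raw[n:]
	hashAlg, n = binary.Uvarint(raw)
	raw = raw[n:]
	length, n := binary.Uvarint(raw)
	raw = raw[n:]
	return version, codec, hashAlg, raw[:length], nil
}

func main() {
	cid := makeCIDv1(0x55, []byte("hello"))
	v, c, h, d, _ := parseCIDv1(cid)
	fmt.Printf("version=%d codec=%#x hash=%#x digestLen=%d\n", v, c, h, len(d))
}
```

Everything a client needs to verify content later is already in this little structure: the digest to check against and the codec telling it how to interpret the block.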
The more complex one, which is where you can do magic, is dag-pb. This is a protobuf-based format and it has many fields. The first one is the type: right now we have a file, but we can also have directories and other stuff. It has some data, which is part of the content stored in the file.
So it's some of the content of the file, and the magic part is the links field, which contains other CIDs we're pointing to. Right now we have two CIDs, and this concretely concatenates the data of both linked blocks into one file. Before, we were limited to two megabytes per file, but now we can take many of those two-megabyte blocks and join them together into bigger files. And because you can join anything, you don't have to join only raw blocks: you can also join dag-pb blocks.
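A dag-pb node is small enough that its protobuf wire format can be walked by hand. The sketch below is a simplified illustration assuming only length-delimited fields (field 1 = Data, field 2 = Links), not a full dag-pb decoder; the `field` helper is a hypothetical utility used here only to build a test node:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// A dag-pb node has two protobuf fields:
//   field 1 (bytes): Data  - a chunk of the file's content
//   field 2 (bytes): Links - one embedded PBLink message per child block
// This walks the length-delimited fields without a protobuf library.
func walkDagPB(node []byte) (data []byte, links [][]byte, err error) {
	for len(node) > 0 {
		key, n := binary.Uvarint(node)
		if n <= 0 || key&7 != 2 { // we only expect length-delimited fields
			return nil, nil, fmt.Errorf("unexpected wire format")
		}
		node = node[n:]
		size, n := binary.Uvarint(node)
		node = node[n:]
		value := node[:size]
		node = node[size:]
		switch key >> 3 {
		case 1:
			data = value
		case 2:
			links = append(links, value) // a raw PBLink message
		}
	}
	return data, links, nil
}

// field encodes one length-delimited protobuf field, for building a test node.
func field(num uint64, value []byte) []byte {
	buf := binary.AppendUvarint(nil, num<<3|2)
	buf = binary.AppendUvarint(buf, uint64(len(value)))
	return append(buf, value...)
}

func main() {
	node := append(field(2, []byte("link-a")), field(2, []byte("link-b"))...)
	node = append(node, field(1, []byte("some data"))...)
	data, links, _ := walkDagPB(node)
	fmt.Printf("data=%q links=%d\n", data, len(links))
}
```

The point of the sketch is how little machinery this format needs: a varint reader and a loop are enough to find the content and the child CIDs.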
You can keep joining them together into files however big you want. So now we'll just walk through decoding a file together, to see where the expense might be.
We start with a CID, Qm…, and we decode it. We see it has many fields: version zero, a hash, and the codec, which is dag-pb. Let's assume this is the content of the CID we decoded. First, we find the data field, which is this part here.
Then we look at the links field, which gives us all the blocks we have to download. We first download the first of the two links. This one is raw, so there's no protobuf: we directly have the content, and we append it to the file we are reading. We repeat the same thing with the second link, which is also a raw block, and now we have the complete sentence. That's kind of boring, but it was extremely easy.
What we just walked is called a Merkle DAG: we have hashes that point to data and to other hashes, which is called a Merkle construction, and it's a DAG in the sense that it's a graph. We can point to the same block multiple times, and there are no cycles, because it's cryptographically secure: we verify the hash every time.
That also ensures that all of the content is the original content we wanted to download, and because we are able to take very big files and break them into lots of smaller blocks, we can do parallel downloads if we want.
If we had a single hash for the complete file, we couldn't download it from multiple peers, because you only know whether a piece of content is correct once you've downloaded the complete block. If your block is 20 gigabytes and you download it from multiple people, you don't know who is cheating and who is not. But because we break the data down into lots of small blocks…
…every two megabytes we know whether the data is correct or not, because it must match the CID we were downloading. More importantly, all we did was shuffle a few bytes around, parse some protobuf, and hash bytes, and hashing bytes is not very expensive; this really shouldn't be expensive. So chapter three is: what if we just did that? I walked you through how we decode files, and I wrote an implementation that does just that.
For that, we get into the question of how we actually fetch the blocks, and we are going to cheat. This is a list of some of the stuff Kubo is doing; we're just not going to do any of it.
We are going to use the trustless gateway, which is a thing that was added this year, I think, to the Kubo gateway, and which allows us to download a CAR file or raw blocks. The key point is that, because we are downloading the raw UnixFS representation, we can do the process I just walked you through: hashing the block, verifying it, unboxing it, and continuing that way. My implementation, which I've written in Go, in my opinion fits all of the requirements set out at the start.
It uses very little RAM. The CPU usage is hash-dominated: the thing you spend the most time on is actually hashing the data, which you probably cannot get around, because you want to know whether your data is correct, so that is not really something we can work on. As for the network, because my implementation uses a gateway, it has no background process; only when you want to download a file from IPFS do you contact the gateway.
How can we improve this? It would be nice to see some medium-weight implementations, kind of like what was presented earlier with Iroh. Mobile peer-to-peer is not that expensive; what is expensive is only a few things, like helping other people by sending them files. That's expensive because you need to respond to random people asking you, "hey, do you have this?" So what if we didn't do that? The same goes for the DHT, the distributed hash table.
The DHT is the way we find files in IPFS, but there are alternatives. One of them is Reframe, which is similar to the gateway, but instead of requesting files, you request the locations of files. When you have a Reframe server, you can ask it: "hey, I would like to know where this file is stored." A medium-weight implementation would be awesome because, instead of contacting a gateway that then has to contact the storage providers or a pinning service, you would contact a Reframe server directly.
We also want this in other languages. I wrote it in Go because Go is easy; however, Go is not very easy to use if your project doesn't already use Go, because of its big runtime cost. So a Rust implementation is already being worked on, and I think we also want one in C, because it's ubiquitous.
Ideally, we want to be able to do FFI from Rust, Zig, and maybe your favorite language. It's surprisingly easy to do: my implementation is 300 lines, though it's not quite correct yet, so in a few thousand lines of code you should be done. If you need some help, come talk to me; I will be very happy to help you write one in Java or whatever.
What I did here came from my experience: I knew how UnixFS worked and how I could make a fast implementation, so I did it. However, it's not very scalable to teach everyone all the time, so I'm also writing some specs. This is part of a global effort on IPFS to make things more standardized and to help people who want to build their own implementation, more tailored to their needs. There are links to those specs, and links to this code.
If you want to see my implementation: it's very alpha and doesn't yet handle all the UnixFS aspects; that is still in progress and will be done one day, hopefully. I'm done.