Description
Today's web is fragile, inefficient, and expensive; the distributed web is changing that. IPFS rearchitects the web to work peer-to-peer, addressing data by *what* it is instead of *where* it's located. Our latest release, IPFS 0.5, simplifies and accelerates building for web3, making the dweb accessible to more developers. Its performance improvements make IPFS 0.5 the fastest way to store, move, and download files peer-to-peer.
Hello again, glad to be back. As I was saying, instead of a centralized model where all of our data gets sent through a central server, with every node connecting to a single place, IPFS works with a much more resilient model where many nodes, both servers and users, connect directly to each other. They can even keep working when the network is subdivided or there are faults. Say some of us get disconnected from each other: we can still access and use our applications.
This really centers on the concept of content addressing. Instead of addressing content by where it's located on the network or who's hosting it, say example.com, IPFS addresses data by what it is. This is a small but fundamental, very critical change in the way that we address data on the Internet. It gives us verification that the data we're loading is what we meant to get.
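As a concrete sketch of that idea (my own illustration in Python, not code from IPFS), content addressing means the address of some bytes is just a hash of those bytes, so whoever serves them, you can verify you got what you asked for:

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address is derived from WHAT the data is: a hash of its bytes.
    return hashlib.sha256(data).hexdigest()

def verified_fetch(address: str, untrusted_bytes: bytes) -> bytes:
    # Re-hash whatever a peer sent; it either matches the address or is rejected.
    if content_address(untrusted_bytes) != address:
        raise ValueError("content does not match its address")
    return untrusted_bytes

page = b"<html>hello dweb</html>"
addr = content_address(page)
assert verified_fetch(addr, page) == page  # any peer holding the bytes can serve them
```

Real IPFS CIDs carry a codec and multihash prefix on top of the raw digest, but the verification idea is the same.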
The IPFS ecosystem is now full of hundreds of different projects, applications, and tools building on top of this decentralized fabric. Lots of different categories are represented here, from identity to marketplaces to content distribution to social media. All sorts of applications are building on top of IPFS to make web3 possible.
We, the IPFS core team, also run one of many IPFS HTTP gateways to bridge content from the IPFS network to everyone who is accessing it from HTTP today, kind of your traditional web browser world. And just the one we run, which is one of many (Cloudflare runs one, and a number of other groups do as well), serves about 13 million requests a day, about 5 terabytes worth of data. So definitely an exciting growth trajectory there as well for making this more accessible to many groups.
A
So
I'm
going
to
talk
through
a
couple
of
highlights
of
the
cool
stuff,
that's
happened
in
q1
in
ipfs
ecosystem
and
then
I'll
dive,
particularly
into
one
of
those
that
ipfs
it
would
at
5:00
at
least
so
about
a
month
ago,
a
little
over
a
month
ago,
opera
for
Android
added
default
support
for
ipfs,
and
this
is
actually
a
really
important
part
of
our
browser
upgrade
path.
We've
worked
with
a
lot
of
browsers
in
the
past,
Mozilla
brave
and
others
in
order
to
bring
ipfs
support
to
people
right
default
within
your
browser.
There are a lot of tools and extensions and techniques to make web3 more accessible, but really, default inclusion in a browser is the way to go in terms of making sure we have the market and ecosystem that can utilize these tools. So it's really awesome to see Opera add that support, and we're very excited to continue working with other browsers as part of that upgrade path, to bring built-in IPFS nodes to every browser that folks use and love.
Speaking of browsers, Brave put their swag store up on IPFS like three days after that, so it's very exciting to see this continued movement. They're using the Origin marketplace for the swag store they created, which is also phenomenal and beautiful, a very slick Shopify competitor built on top of IPFS. So now you can check out and buy all of your favorite Brave swag directly over your IPFS node. We also saw other groups coming in and doing IPFS integrations for storage: in Wolfram Language, the default external storage option is now IPFS, with Dropbox as the alternate. So it's great to see folks going full web3-first and building default peer-to-peer support into the work they're doing. We also saw a number of websites making use of IPFS as a way to host a decentralized site: ethereum.org put their VuePress website up on IPFS and ENS earlier this year.
So now you can go to ethereum.eth.link, or, if you happen to be running MetaMask or IPFS Companion or a number of other extensions, you can just go to ethereum.eth directly and that will resolve for you. So I'm excited to see more of these groups coming on. This is really made possible by how easy it's gotten to put your personal website up on something like IPFS. I personally did this a month and a half ago using Fleek, which connects directly to the GitHub repo that holds your personal site. It could be using something like Hugo, or it could be straight-up HTML and CSS like mine, and Fleek automates the whole website deployment process. So every PR, every update you make to that repo, automatically deploys, and it's a couple of clicks of a button and then boom, your site is up on IPFS. They'll even do custom domains, SSL, all of those useful configurations. And now, if you want to take it a step further, you can also add a decentralized URL, a name, for your site. I have my own .eth domain, and a lot of other people have .eth domains, and you can now upload folders and files and websites to IPFS directly within the ENS manager through that portal. Similarly, for .crypto and .zil addresses, you can use their IPFS hash uploader.
So the UX here, the tooling that's being created for web2, is just a really important part of this upgrade path: making web3 reach into the web2 model and reach developers who wouldn't otherwise come to this space, so they can make use of these tools and gain access to them without having to fully grok how a DHT works or understand why decentralization is better. They can start getting access today.
Another very cool development in the Q1 timeline was that a group called Equilibrium Labs started a Rust IPFS implementation. As part of our 0.5 release, we learned there's a chunk of folks who thought that back in 2013 we should have written IPFS in Rust from the get-go instead of Go, which was our initial implementation. I think Rust was not even able to compile itself back in 2013, but regardless, there is now a Rust IPFS implementation that folks are upgrading and adding to, and it's a great place to get started.
So speaking of 0.5, there's a ton of stuff that landed just last Tuesday in our 0.5 release. Many different parts of the IPFS upgrade flow, the ways that you can add and find data in this vast network, were improved, and I'm excited to dive into them a little bit with you and talk about how we did some of them. The first and most major aspect of IPFS that we improved in this release is content routing.
That's how you find information that you care about in the network. Say you're going to load a website: how do you find the data associated with whatever IPFS hash that website actually is under the hood, and then fetch that data from someone? There are a number of areas that we improved in order to improve content routing performance.
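A rough mental model of that routing step (a toy sketch of my own, not real IPFS code; the hash and peer names are invented): peers announce provider records for the hashes they host, and a lookup asks who has a given hash before fetching. Real IPFS shards these records across the DHT rather than keeping one table:

```python
from collections import defaultdict

# Toy in-memory "DHT": provider records map a content hash to the peers
# announcing they can serve it.
provider_records = defaultdict(set)

def provide(content_hash: str, peer_id: str) -> None:
    # Announce: "I am hosting the data behind this hash."
    provider_records[content_hash].add(peer_id)

def find_providers(content_hash: str) -> set:
    # Routing step: learn WHO has the data, then fetch it from one of them.
    return provider_records[content_hash]

provide("QmSiteHash", "peer-A")
provide("QmSiteHash", "peer-B")
assert find_providers("QmSiteHash") == {"peer-A", "peer-B"}
```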
The first of those was AutoNAT. I've mentioned that we grew 30x last year, which is a lot, and as part of that we ended up having a lot of nodes in our network that were maybe not the best peers to rely on in the long term for routing requests back and forth. We ended up with a lot of nodes behind NATs or home firewalls, nodes that were coming in and out very dynamically: say you're loading a site and then poof, you're off.
This also helps a little bit as folks come online very temporarily: you won't end up becoming a server and then suddenly disappearing from people's routing records. So this is the solution we used to clean up the set of folks participating in the DHT and make it much faster, because that way you don't have to dial many different people in the network in order to find someone who will actually respond to your routing request, so that you can get to the point you're looking for in the DHT. Another really important part of the content routing improvements we made was making sure that when we're trying to look around this network, the peers we're asking for routing information are high quality. A challenge we had with so many more peers in the network is that we were dropping some really good peers, ones giving us lots of useful data and really well connected, off the bottom of our routing tables, which hold the information about peers.
So upgrading our routing tables was super important, as was our lookup algorithm. Now we can do a much better job of querying the closest peers and getting fast responses to questions like: who is going to have a record for this file that I'm looking for? So instead of dialing many peers and failing, we can do this a lot faster as part of this.
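For intuition, here's a tiny sketch of the Kademlia-style "closest peers" idea that the lookup leans on. Real peer IDs are 256-bit hashes and the query walks the network; this toy of mine just sorts small integer IDs by XOR distance:

```python
def xor_distance(a: int, b: int) -> int:
    # Kademlia-style distance between two IDs is their bitwise XOR.
    return a ^ b

def closest_peers(target: int, peers: list, k: int = 3) -> list:
    # Query the k peers whose IDs are XOR-closest to the target key.
    return sorted(peers, key=lambda p: xor_distance(p, target))[:k]

peers = [0b0001, 0b0100, 0b0111, 0b1000, 0b1110]
# For target 0b0110, the two XOR-closest IDs are 0b0111 (dist 1) and 0b0100 (dist 2).
assert closest_peers(0b0110, peers, k=2) == [0b0111, 0b0100]
```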
These are pretty major low-level changes to the IPFS peer-to-peer network, and because of that, instead of launching changes to a public network of hundreds of thousands of nodes, crossing our fingers, and hoping it worked, we built up tooling. We built a project called Testground, which you can actually hear about from Raul tomorrow, in order to make sure that as we were building, as we were going along, we were making the network better. We can prototype and evaluate in thousand-node simulations and clusters with real live conditions: things like network jitter, multiple different versions, all sorts of configurations that helped us simulate a real live network. Through that we were able to get a lot of benchmarks and data about how we made the IPFS network and DHT better. So first off, when it comes to providing data to the network, where you want to announce that you're hosting a website, you can see the average and 95th percentile for the sorts of tests we were running in Testground.
We were able to make it somewhere between two and six times faster, depending on whether you're looking at average time or 95th percentile time, but definitely shortening the journey that you have to make to find nodes with good content, and reducing the number of peers you dial who aren't able to respond to you. And then finally, for IPNS, which is our mutable naming system, the aspect where you want to look up all of the nodes hosting data, we made this about five times faster.
So when you look at it all together, here are some of our graphs of some of the nodes that we watched in the network before and after the 0.5 release, which was on April 28. You can see that the number of dials and queries we're making goes way down, the success rate goes way up, and provide time and find time both go down. We were previously in the range of eight seconds on average for the previous release, and we've brought that down to one and a half seconds on average. So still room to go, definitely space for improvement, but definitely a significant improvement, and the sort of network that you can then rely on and use for dynamic applications. These numbers are very variable, and especially as the network continues to upgrade, we expect them to keep getting better, because remember, this is all dependent on the number of nodes running this new code, running the new and improved DHT logic. So the more of the network that's made up of those good nodes that are responsible about whether or not they join the DHT, the better. Alright, so content routing aside, there was also a ton of other stuff that landed as part of this release.
A major part here was content exchange. I mentioned briefly the work we did with Netflix on making Bitswap better.
The main thing we did here was add a new type of request. Previously you could only request data directly, with messages we called wants: "I want this data," and all you could do was ask the network, "hey, I want this." We added a new message type, so now you can check whether people have data before actually fetching it from them, which allows much better parallelization and lets you more intelligently request data just from people you know already have it.
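The shape of that change, in a toy sketch (not the actual Bitswap wire protocol; the peer names and CID are invented): ask a cheap "do you have it?" question first, and only fetch the bytes from a peer that said yes:

```python
def want_have(peer_blocks: dict, peer: str, cid: str) -> bool:
    # Cheap probe: "do you HAVE this block?" No data is transferred.
    return cid in peer_blocks.get(peer, {})

def fetch(peer_blocks: dict, peers: list, cid: str) -> bytes:
    # Only send the expensive "want-block" to peers that answered yes.
    holders = [p for p in peers if want_have(peer_blocks, p, cid)]
    # (A real client would probe in parallel and handle the no-holders case.)
    return peer_blocks[holders[0]][cid]

peer_blocks = {"peer-A": {}, "peer-B": {"cid-1": b"block bytes"}}
assert fetch(peer_blocks, ["peer-A", "peer-B"], "cid-1") == b"block bytes"
```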
So now, with large swarms of leechers and peers, you can seed an image, a container image in the case of the tests we were doing with Netflix, much faster. We actually ended up optimizing the benchmark that we cared about, on container images, to be three to four times faster than Docker Hub. So it definitely made significant improvements there. Bitswap is the data transfer algorithm we use today, but GraphSync is the data transfer algorithm of the future.
From an import and export perspective, we made adding data to IPFS a lot faster. Badger, which is one of our experimental datastores for IPFS, has been around for a long time, and many folks use it with pretty heavy requirements around performance. We made it over 2x faster using Badger, 2 to 3x depending on whether you're using an SSD or an HDD, and we were able to do that because we switched from synchronous writes to asynchronous writes. We discovered this because when we were benchmarking on macOS and Linux versus Windows, we were seeing a 2x discrepancy in how fast it was. It turns out Windows didn't even support synchronous writes, and that led us to identifying this as a way we could improve performance for everyone. Note that Badger is not enabled by default, and if you want to use the improved performance aspects of Badger, please enable it. You'll see something like a 30x performance improvement over flatfs, but it's not for free: there are still a couple of bugs around things like larger memory usage.
This uses the new CAR file format that has come out of the IPLD team, which allows you to clearly describe how a graph of data is laid out. This lets you import and export data between different nodes, and it can actually be a very fast way to transfer data between two nodes: you can export an entire CAR file and import it into another node.
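As a rough illustration of that workflow (a sketch of mine using JSON as a stand-in for the real binary CAR format): flatten a node's DAG of blocks into one blob, move the blob, and import it on the other side:

```python
import json

def export_dag(blockstore: dict) -> bytes:
    # Flatten the whole graph of blocks into a single portable blob.
    return json.dumps(blockstore, sort_keys=True).encode()

def import_dag(car_bytes: bytes) -> dict:
    # Rehydrate the blob into a blockstore on the receiving node.
    return json.loads(car_bytes.decode())

node_a = {"root": ["child-1", "child-2"], "child-1": [], "child-2": []}
node_b = import_dag(export_dag(node_a))
assert node_b == node_a  # the whole graph moved in one blob
```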
We also made a lot of changes to libp2p. AutoNAT, as I was mentioning before, is a core part of how we improved which nodes are participating in the IPFS DHT, and so we needed to use AutoNAT a lot more. We improved the AutoNAT service itself: we have faster NAT detection and better rate limiting, since many more nodes are acting as AutoNAT peers, and it's now enabled by default on all go-ipfs nodes.
We also upgraded our experimental QUIC support to the latest draft. It's going to be, fingers crossed, the final version before we stabilize it, and we're also planning to enable it by default in the next release, which is coming in a little bit less than six weeks now. So very excited to see that; it's very important for things like websites. Alright.
Last but not least, we made a couple of additional improvements to IPNS, other than just generally making it faster thanks to all of the DHT improvements. We added ENS support, so you can now do ipfs.eth.ipns.localhost, and it generally supports .eth TLDs out of the box. I believe there's a PR landing imminently for .crypto as well, so we're seeing more of these decentralized TLDs landing within IPFS as well.
There are more and more things to talk about as part of this release, and I could go on for a long time, but I also would love to answer people's questions, so I'll leave you to look at these remaining things asynchronously at another time and direct you to the documentation site. We actually have a whole section specifically about this latest release: the features, how to upgrade, and the changelog.
A
So
go
give
it
a
go,
give
it
a
try,
send
us
your
benchmarks,
tell
us
how
much
better
it's
gotten
and
excited
to
hear
and
see
what
you're
going
to
build.
So
thank
you
very
much
for
the
time
and
listening
and
I
think
I've
been
staring
at
the
wrong
chat
this
entire
time,
so
I'm
gonna,
try
and
toggle
over
the
right
place
and
see
if
there's
any
questions
that
can
answer
in
my
four
minutes
remaining
in
this
time
really
want
to
ask
a
question
about
costs.
So, about costs: when you think about nodes in the IPFS network, an important thing to keep in mind is that the data you're hosting and the node you're running are kind of your own responsibility. When you think about choosing what data you host on your node, or the costs associated with hosting data, it's your responsibility. If you instead want to launch your website and then walk away, close your laptop, and not pay any more attention to it, then you want to make sure that you persist that content somewhere in the IPFS network, whether it's a collaborative cluster of other like-minded humans who want to help keep your website online, or you're paying a pinning service like Pinata, Infura, or Temporal, or you're paying another decentralized network, something like Filecoin or others, to store your content. Those are different ways to make sure your data gets persisted.
Yes, I can also share out the presentation after the fact. And that's a good question about data behind NATs. We do a lot of NAT hole punching, and with many NATs we can break through them and set up direct connections. We also have a relay service: we run a couple of relay nodes, and a couple of other groups run relay nodes as well, which help relay connections between nodes that are behind NATs. So this does work in many cases, but NAT hole punching is hard, so it does not always work, and we have a couple of tips and tricks for how to identify if you're behind a NAT and then what to do about it. The next question here is: is there any way for you to control, say you upload a file or an image and that results in a particular hash, being able to scrub that hash, or the data associated with that hash, from other people's nodes somewhere else on the network?
A
No,
you
don't
have
any
control
over
other
people's
machines
just
over.
You
know
as
the
Internet
today,
where
say
someone
can
screenshot
or
download
or
otherwise
grab
your
from
your
website
or
Facebook
or
from
any
other
service,
and
you
can't
wipe
it
from
their
machine
remotely.
The
same
goes
for
kind
of
thinking
about
the
D
Web
and
you
you
know,
there's
also
no
way
of
knowing
like
maybe
they
had
a
valid
copy
of
that
data,
that
they'd
gone
and,
and
so
no
individual
has
control
over
the
than
the
other
people's
nodes.
Well, thank you so much, Molly, we really appreciate it. It's awesome to continue to dive into IPFS, and really, thank you for your time. You know, between you and Juan, you've really shown the capabilities and what you all have done for the iterative improvement. So thanks for all y'all's contributions thus far.