From YouTube: CSCON[0] Molly Mackinlay - IPFS & libp2p in 2020
Description
Molly Mackinlay from Protocol Labs gives a year in review for IPFS and libp2p in 2020.
At The First Ever CSCON[0] Virtual Event!
A: Our next speaker is absolutely brilliant and, yet again, another dear friend of ChainSafe's. Here's a talk on IPFS and libp2p in 2020 from Protocol Labs' Molly Mackinlay. Molly.
B: Well, excited to chat with y'all today. I wanted to give a fast, lightning tour of all the really exciting stuff that's happened with IPFS and libp2p in the past year. I think a lot of folks probably last engaged more heavily with IPFS around the April-May timeframe, when we had our big IPFS 0.5 launch, so I'll recap that briefly and then also dive into some of the cool stuff that's happened in Q4.
B: For those of you who aren't super familiar with IPFS, it stands for the InterPlanetary File System, and it aims to take the centralized HTTP web, where many different end users must all connect to the same server in order to get the data, to a more distributed peer-to-peer system, where you can have subnets operating that are partially offline and a much more flexible system of nodes directly accessing content on each other.
B: The way it does that is, instead of addressing data by where it's located in the network, it uses content addressing in a global namespace to move data around a network and uniformly verify that the data you're fetching from any random node in the network is the data you're looking for. This aims to address a whole ton of different problems.
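The idea of content addressing described above can be sketched in a few lines of Python. This is a toy model: real IPFS CIDs use multihash and multibase encodings and chunked DAGs, not a bare hex SHA-256 as here.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Name data by a hash of its bytes (a simplified stand-in for a CID)."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_address: str) -> bool:
    """Any node can serve the bytes; the fetcher re-hashes them to verify."""
    return content_address(data) == expected_address

addr = content_address(b"hello dweb")
assert verify(b"hello dweb", addr)          # the right bytes verify
assert not verify(b"tampered bytes", addr)  # altered bytes are rejected
```

Because the name is derived from the content itself, any untrusted peer can serve the data and the fetcher can still check it, which is what makes fetching from "any random node" safe.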
B
It's
been
used
a
good
bit
in
kind
of
either
offline
first
use
cases
or
in
efficiency
use
cases
or
in
places
where
you
really
need
to
verify
that
the
data
that
you're
time,
stamping
into
a
blockchain,
for
example,
is
the
data
that
you
mean
to
and
there's
a
whole
ton
of
security
benefits
that
come
with
that
as
well
also
helps
a
lot
in
the
resiliency
robustness
case
of
kind
of
links
breaking
because
something
got
moved
from
server
a
to
server
b,
when
you're
content
addressing
data
that
same
content
will
always
have
the
same
name,
and
that
helps
a
lot
in
in
a
ton
of
important
use
cases,
and
so
there's
lots
of
fun
problems
to
solve
here.
B: IPFS is part of what I've been colloquially calling the interplanetary stack: the set of protocols that build on top of each other to make a distributed, peer-to-peer web possible. IPFS builds really heavily on top of libp2p, and I'll also talk about a few folks who are using just libp2p, say to power various blockchains that have been launching over the past couple of months.
B
It
also
uses
ipld,
which
is
the
data
layer
for
how
we
actually
form
files
and
dags
distributed
async,
basically
graphs
of
data
that
we
can
then
move
around
the
network
and
then
falcoin
also
builds
on
top
of
this
whole
stack
and
adds
an
incentive
layer
for
fetching
and
finding
content
in
the
network.
B
As
I
mentioned
briefly,
lib
p2p
is
kind
of
this
modular
transport
layer,
and
so
it's
actually
made
up
of
a
whole
ton
of
different
sub
protocols,
which
then
fit
into
kind
of
finding
and
and
exchanging
data
with
nodes
within
the
network
and
setting
up
those
sorts
of
connections
and
is
used
super
heavily
and
a
lot
of
the
improvements
that
we
made
to
ipfs
over
the
past
year,
actually
involved
kind
of
the
the
deep
changes
and
improvements
within
lib
p2p
that
then
flowed
into
benefits
up
the
stack
in
ipfs.
B: There are a whole ton of people building on top of IPFS now. We actually have an open Figma diagram where anyone can add new applications to our IPFS ecosystem view, and there's a ton of new ones here, including from the content-creator side, with folks who are doing video or audio or other kinds of content.
B
So
lots
of
expansion
here,
even
just
in
in
the
past
year-
and
we
also
last
year-
saw
a
lot
of
growth
in
the
the
wider
network
as
well,
and
we're
we're
excited
to
see
that
many
of
those
nodes
have
also
upgraded
to
our
most
recent
release,
which
or
maybe
since
0.5,
which
is
our
our
may
release.
A
number
of
folks
have
upgraded
since
then.
B
So
we
have
hundreds
of
thousands
of
nodes
in
the
network,
many
of
whom
are
are
running
the
the
most
recent
improvements
to
ipfs,
and
we
also,
in
addition
to
running
kind
of
this
distributed
network
of
nodes,
also
run
an
http
gateway
whereby
we
make
the
content
living
within
these
nodes
within
that
network
accessible
over
http
to
all
sorts
of.
B
You
know,
classic
browsers
that
are
not
yet
ipfs
enabled
and
where
we
run
one
of
many
gateways,
cloudfire
runs
a
gateway,
a
number
of
pinning
services
and
other
tools
also
run
gateways
like
pinata
and
fura
and
others
just
just
on
our
little
subset
of
this
we're
seeing
really
massive
amounts
of
people
who
are
utilizing
that
tool,
which
demonstrates
there's
even
more
valuable
applications,
building
on
top
of
ipfs
and
putting
valuable
content
within
within
that
network
and
wanting
to
make
it
accessible
and
have
a
smooth
upgrade
path,
kind
of
into
that
web3
world,
and
there
the
the
data
has
also
increased
very
precipitously,
so
lots
of
exciting
data
being
being
used
and
shared
there.
B
So
the
main
meat
of
what
I
wanted
to
talk
to
you
guys
about
today
were
some
of
the
awesome
trends
we're
seeing
in
2020
across
across
this
stack
kind
of
starting
with
q1,
with
the
d
website
explosion.
B
A
lot
of
amazing
tooling
here
made
massive
gains
and
and
ease
access
to
putting
websites
and
web
tools
on
top
of
ipfs,
and
so
we'll
talk
about
that
briefly.
Q2
is
when
we
made
a
whole
ton
of
performance
improvements
to
ipfs
released
our
biggest
upgrade
to
the
protocol
in
a
long
time
which
brought
a
lot
of
performance
improvements
to
all
different
parts
of
the
ipfs
data
cycle.
B
Q3
was
a
a
lot
of
upgrading
some
of
the
the
standards
both
within
ipfs
and
and
the
way
that
ipfs
works
and
connects
into
other
ecosystems
and
making
some
some
new
friends
bringing
some
new
folks
to
the
the
top
of
the
ipfs
stack
and
the
application
ecosystem
and
then,
finally,
what
we've
seen
in
q4
with
a
number
of
new
protocols
coming
out
either
building
on
ipfs
or
la
p2p
or
both
in
order
to
to
kind
of
become
you
know,
first
level
chains
or
or
other
things
like
that
cool,
so
jumping
into
things.
B: Here, credit goes to groups like ENS and Unstoppable Domains that made it really easy to upload a file and connect it to a dweb protocol, so that you could have, you know, your own .eth name, easily resolve that in the browser, and be able to serve a fully distributed page there, not running through DNS. Unstoppable Domains similarly created a whole set of templates where you could put up these dweb websites in a really easy way.
B
You
could
work
much
more
simply
with
the
protocol
in
a
really
nice
to
really
nice
to
manipulate
way,
and
actually
one
of
the
groups
that
came
on
the
scene
and
drove
huge
adoption
here
and,
and
I
think,
reached
a
whole
ton
of
folks
in
this
wider
ecosystem
was
fleek,
who
built
a
very,
very
smooth
deployment
platform
where
you
effectively
connect
any
any
content
address
website
on
github
and
it
will
publish
to
to
ipfs
and
just
smooth
every
every
new
pr
to
that
site
gets
auto
deployed
it.
Really.
B: That can reach out and support a set of use cases and developer flows that had been very, very difficult previously, because we'd been spending so much time on the low-level infrastructure and capability side of things that we'd neglected that end-user development flow. Fleek came in and helped solve that problem, which helped catalyze this amazing explosion of dweb sites. As an example of folks who took advantage of that, ethereum.org went and put their site on IPFS and ENS.
B
So
you
can
view
that
at
ethereum
link,
origin
created
a
whole
set
of
of
d
web
stores
where
you
can
go
and
buy
swag
from
various
different
providers.
So
brave
has
put
their
swag
store
up
on
origin
using
ipfs,
which
is
pretty
cool.
There's
also.
This
awesome
explosion
of
d5
sites
who
are
using
ipfs
to
have
a
fully
distributed
front
end
that
doesn't
rely
on
any
single
party
to
keep
that
website
up
which
really
helps
with
the
decentralization
of
sites
like
uniswap,
kyber
ether,
wallet.
B
Things
like
that,
where
you
know
again
the
the
core
app
like
back
end
of
the
application
lives
in
the
blockchain.
The
front
end
is
distributed
on
ipfs
and
you
have
no
central
party.
That's
a
bottleneck
or
fully
in
control
of
that
platform
and
kind
of
the
this
awesome
movement
of
sites
into
the
d
web
culminated
with
ipfs
actually
getting
default.
B
Ipfs
colon
support
in
opera
for
android
in
I
think
the
end
of
march,
beginning
of
main
timeframe,
and
so
that
was
super
super
exciting
to
see
that
browser
journey
and
the
set
of
sites
that
have
been
being
built
up
in
ibm's
ecosystem
become
more
accessible
on
on
various
platforms
that
people
are
using
and
wanting
to
access
these
sites
more
directly.
B
Looking
at
q2,
this
was
our
major
ipfs
release
quarter
and
so
a
lot
of
a
lot
of
time
spent.
This
quarter
was
the
upgrade
and
and
release
of
these
performance
improvements
and
then
also
making
sure
that
the
tools
we
used
in
order
to
validate
and
benchmark
and-
and
you
know,
optimize
the
improvements
we're
making.
The
protocol
also
became,
accept
accessible
to
the
wider
ecosystem,
so
kind
of.
B
In
the
april
time
frame
we
put
out
ipfs
0.0,
which
contained
a
whole
ton
of
different
performance
improvements
to
adding
data,
providing
data
to
the
network,
finding
data
within
the
network
and
then
finally
fetching
that
data
from
various
peers,
one
of
the
big
pieces
of
this
was
was
refactoring
and
cleaning
up
the
ipvs
dht,
as
the
network
had
grown
30x
in
2019.
B
We
definitely
hit
a
number
of
scaling
challenges,
and
so
this
refactor
helped
clean
out
a
number
of
nodes
within
the
network
that
were
not
doing
a
good
job
serving
queries
and
and
update
some
of
the
the
logic
within
the
dht
to
better
identify
where
content
was
and
and
search
the
network
better.
So
one
of
the
first
challenges
was:
we
had
lots
of
connectivity
gaps
where
people
were
joining.
B
The
network
were
undialable
and
then
queries
would
take
a
very
long
time
since
you'd
end
up
going
down
many
false
roads,
where
you
couldn't
end
up
dialing
the
peers
at
the
end
of
the
network,
to
get
to
the
next
step
and
so
by
actually
segmenting
the
network
into
nodes
that
would
participate
in
the
dht
being
servers
of
the
dhd
versus
consumers
of
the
dhd
or
just
clients
that
differentiated
and
we
we
automatically
were
able
to
detect
whether
or
not
you
were
dialeable
and
therefore
could
participate
as
a
good
server
of
the
network
that
that
helped
us
default
nodes
into
the
correct
configuration
for
them.
B
So
if
you're,
behind,
like
a
home
local
area
network
and
you're,
not
able
to
serve
query,
you
wouldn't
join
the
network
as
a
server
and
therefore
you
wouldn't
end
up
being
a
like
a
gap
or
a
someone.
Who's
not
able
to
effectively
forward
on
queries
to
their
correct
destination.
And
so
this
helped
us
in
many
ways
remove
a
set
of
nodes
who
were
previously
slowing
down
our
queries,
our
queries
and
and
having
dead
ends.
B
Then
the
next
thing
we
did
was
we
made
sure
for
the
good
nodes
you
were
connected
to.
You
stayed
connected
to
them,
and
so
this
was
a
lot
of
improvements
both
to
autonat
and
to
the
routing
tables
within
our
dht.
So
understanding
which
peers
are
are
valuable,
where
you
have
open
connections
and
that
you
are
prioritizing
peers
and
keeping
them
in
closer
buckets.
B
When
you
ask
them
questions
and
they
succeed
at
giving
you
what
you
wanted,
and
so
that
helped
us
avoid
removing
good
peers
that
we
wanted
to
stay
connected
to
and
avoided
a
lot
of
churn
in
your
routing
table,
which
might
then
cause
you
to
mostly
have
nodes
that
are
aren't
poorly,
aren't
well
connected
in
the
network
and
then
finally,
this
was
like
the
direct
query
logic
within
the
dht.
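The "keep peers that answer you" policy can be sketched as a tiny routing-table bucket. This is an illustrative model only; the names and the least-recently-useful eviction rule are my simplification, not go-libp2p's exact policy.

```python
from collections import OrderedDict

class KBucket:
    """A routing-table bucket that prefers peers with a track record of
    answering queries (illustrative sketch, not the real implementation)."""
    def __init__(self, capacity: int = 20):
        self.capacity = capacity
        self.peers = OrderedDict()  # ordered from least to most recently useful

    def record_success(self, peer: str) -> None:
        # A peer that just answered a query moves to the "most useful" end.
        if peer in self.peers:
            self.peers.move_to_end(peer)

    def add(self, peer: str) -> None:
        if peer in self.peers:
            return
        if len(self.peers) >= self.capacity:
            # Evict the peer that has gone longest without being useful,
            # rather than churning out peers we rely on.
            self.peers.popitem(last=False)
        self.peers[peer] = True

bucket = KBucket(capacity=2)
bucket.add("peer-A")
bucket.add("peer-B")
bucket.record_success("peer-A")   # peer-A proved useful
bucket.add("peer-C")              # bucket full: peer-B is evicted instead
assert list(bucket.peers) == ["peer-A", "peer-C"]
```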
B
We
made
a
lot
of
updates
to
our
cademoa
implementation
that
helped
us
do
the
hops
in
the
network
where
each
time
with
our
dht
looking
a
set
closer
to
you
know
the
nodes
that
are
likely
to
have
the
content
that
you're
fetching,
and
so
we
improved
our
query
time.
We
tested
this
super
heavily.
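The hop-by-hop convergence she describes comes from Kademlia's XOR distance metric. A minimal sketch of the core idea (tiny one-byte IDs here; real peer IDs are much longer hashes):

```python
def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia measures closeness as the XOR of two IDs, read as an integer."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def next_hops(known_peers: list, target: bytes, k: int = 3) -> list:
    """Each lookup step queries the k known peers XOR-closest to the key,
    so every hop moves into a region of ID space nearer the content."""
    return sorted(known_peers, key=lambda p: xor_distance(p, target))[:k]

peers = [b"\x09", b"\x00", b"\x0f", b"\xff"]
closest = next_hops(peers, target=b"\x08", k=2)
assert closest == [b"\x09", b"\x0f"]  # distances 1 and 7 beat 8 and 247
```

Each queried peer returns peers even closer to the key, and the lookup repeats until it reaches the nodes responsible for the content.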
B
This
was
you
know
this
is
a
very
core
part
of
of
whether
the
dhc's
functioning
or
not,
and
so
upgrading
this
required
a
lot
of
validation
and
simulation
to
make
sure
that
we
were
actually
improving
the
query
logic,
but
this
helped
us
vastly
improve
the
speed
at
which
we
were
able
to
find
the
content
desired
in
the
network.
B
So,
as
I
was
mentioning,
we
needed
to
do
a
lot
of
testing
and
a
lot
of
simulation
in
order
to
make
these
changes.
These
are
pretty
fundamental
changes
to
our
dht
implementation,
and
you
know
deploying
those
to
a
network
of
millions
of
end
users
and
hundreds
of
thousands
of
nodes
and
crossing
our
fingers
that
it
worked
was
not
going
to
cut
it.
B
Not
even
close,
and
many
of
these
things
we
couldn't
just
like
you
know,
on
a
local
network,
simulate
the
the
sorts
of
changes
in
a
live
context
that
we
needed
to
to
validate
that
our
improvements
were
going
to
be
useful,
so
we
actually
went
and
built
our
own
testing
framework
so
that
we
could
do
this.
B
We
built
it
in
a
kind
of
abstracted
way,
so
it
can
be
used
by
any
sort
of
peer-to-peer
network
or
protocol
in
order
to
do
distributed
simulation
of
node,
jitter
and
latency,
and
configuration
and
even
versions
version
compatibility
so
that
we
could
evaluate
this,
and
we
use
that
to
grab
a
lot
of
different
data
and
benchmarks
for
what
what
those
nodes
would
you
know
how
this
would
perform
in?
B
You
know
a
live
real
network
with
hundreds
of
thousands
of
nodes,
and
so
we
we
ran
our
new
dhd
improvements
through
this
over
and
over
again
and
allowed
us
to
optimize.
For
example,
one
of
the
really
important
and
valuable
improvements
we
got
here
was
20
to
30
x,
faster
improvement
in
providing
data
to
the
the
ibm
sdhd,
still
a
lot
of
room
to
go
here.
B
So
a
lot
more,
we
can
do
to
make
this
even
faster,
but
we,
these
sorts
of
massive
changes
when,
when
viewed
within
our
testing
framework,
were
you
know,
helped
us
validate
that
we
were
on
the
right
track.
Similarly,
when
we're
finding
providers
of
the
content
we're
looking
for,
we
were
able
to
make
a
two
to
six
x,
faster
improvement
for
fetching
data
from
ipns,
which
is
our
mutable
naming
system.
B
That's
built
on
top
of
ipfs
we're
able
to
get
approximately
like
5x,
faster
and
then
actually
for
ipns
over
pub
sub,
which
was
something
that
we
added
as
part
of
0.5
and
then
stabilized
kind
of
the
the
release
after
we
were
able
to
make
putting
and
getting
with
ips.
Also,
a
ton
ton
faster,
still
more
improvements
there
as
well,
but
able
to
like
refactor
a
system
that
you
know
had
had
some
significant
performance
hiccups
into
something
that
that
works
pretty
reliably.
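The mutability IPNS adds on top of immutable content addressing can be sketched very simply: a name resolves to whichever record carries the highest sequence number. This is a toy model; real IPNS records are signed and validated, and the CID strings here are placeholders.

```python
def resolve_ipns(records: list) -> str:
    """Resolve a mutable name to the value of its newest record.

    Republishing with a higher sequence number repoints the name at new
    content, while each pointed-to CID stays immutable and verifiable.
    (Sketch only: signature checks and record expiry are omitted.)
    """
    return max(records, key=lambda r: r["seq"])["value"]

records = [
    {"seq": 1, "value": "cid-of-v1"},
    {"seq": 3, "value": "cid-of-v3"},   # most recent publish
    {"seq": 2, "value": "cid-of-v2"},
]
assert resolve_ipns(records) == "cid-of-v3"
```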
B
So
in
all,
we
were
able
to
look
across
ipfs
odot
4.23,
which
is
our
release
ahead
of
ipfs
1.5,
and
this
one
we
were
able
to
see
really
really
massive
performance.
Improvements
for
finding
data
in
the
network
went
from
like
somewhere
from
8
to
42
seconds,
which
was
just
unbelievably
slow
and
very,
very
painful
to
sub
second,
in
most
cases,
and
now
we're
seeing,
I
think,
95th
percentile.
As
of
now
as
more
nodes
in
the
network
upgrade
to
the
latest
versions
of
ipfs.
This
continues
to
get
faster
and
faster.
B
So
I
think
we're
seeing
even
better
metrics
than
this,
where
95th
percentile
content
routing
is
now
below
three
seconds,
which
means
that
the
vast
majority
of
queries
in
the
network
are
completing
very
fast
and
are
able
to
yeah
again
still.
We
would
like
to
get
see
these
in
the
millisecond
time
frames.
For
you
know
many
use
cases
like
loading
pages,
but
huge
improvements
here
from
where
we
were
at
at
the
beginning
of
the
year.
B
In
addition
to
finding
content
in
the
network,
another
area
we
collaborated
with
netflix
actually
to
improve
was
transferring
data
in
the
network.
So
you
know,
depending
on
your
configuration,
you
may
as
an
ipfs
node,
be
spending
a
lot
of
time,
actually
syncing
data
between
other
nodes
or
fetching
data
from
many
nodes
at
once,
in
order
to
benefit
from
having
this
kind
of
peer-to-peer
architecture
and-
and
that
was
the
the
use
case
and
challenge
that
the
the
netflix
team
was
working
to
solve
from
kind
of
a
ci
cd
pipeline
platform.
B
And
so
we
made
a
ton
of
improvements
to
bitswap,
which
is
our
data
transfer
algorithm
in
order
to
be
more
efficient
with
many
with
fetching
content
from
many
nodes
at
once,
so
that
you
could
quickly
sync
content
from
say
a
whole
array
of
nodes
to
spin
up
like
a
new,
a
new
build
or
new
new
ci
cd
node,
and
it
helps
remove
a
lot
of
the
wasted
bandwidth
of
asking
multiple
nodes
for
a
block
and
then
getting
that
block
from
everyone
and
then
ending
up
with
a
lot
of
duplicates.
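The duplicate-block problem and its fix can be shown with a tiny simulation. This is a simplification of Bitswap's actual sessions and want-have/want-block messages, but it captures why the cheap "who has it?" phase saves bandwidth.

```python
def fetch_naive(peers: list) -> int:
    """Broadcast the full request: every holder sends the block's bytes back,
    so all but one copy is wasted bandwidth."""
    return sum(1 for p in peers if p["has_block"])

def fetch_want_have(peers: list) -> int:
    """Two phases, Bitswap-style: cheap HAVE/DONT_HAVE probes first, then
    request the actual bytes from just one holder."""
    holders = [p for p in peers if p["has_block"]]  # probe replies, no data sent
    return 1 if holders else 0                      # bytes transferred once

peers = [{"has_block": b} for b in (True, True, False, True)]
assert fetch_naive(peers) == 3       # two duplicate copies wasted
assert fetch_want_have(peers) == 1   # exactly one copy transferred
```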
B
We
were
able
to
bring
that
way
way
down
and
also
increase
our
parallelism
and
throughput,
which
is
great
as
part
of
that
from
a
content
exchange
perspective.
B
We
also
added
some
very
early
support
for
graphsync
graphsync
is
the
the
data
transfer
algorithm
that
is,
was
developed
kind
of
in
part
thinking
directly
about
blockchains
and
how
blockchains
want
to
exchange
content
in
the
network
where
you
have
very
deep
chains
of
parents,
and
you
want
to
make
sure
that
as
you're
fetching
data
you're
able
to
kind
of
pull
each
incremental
block
not
with
a
round
trip
in
between
where
you
like.
You
know
get
the
parent
and
then
you
ask
for
its
parent
and
that's
parent.
It's
parent!
B
You
want
to
make
one
direct
query
all
the
way
through
the
history
of
the
network.
Graphsync
has
a
selector
language
that
allows
you
to
effectively
do
like
a
send
me
all
parents
of
this
node
sort
of
request
and
then
sync
all
of
those
back
and
forth,
and
so
there's
ongoing
work
in
progress
to
work
on
bringing
graph
sync
and
bitswap
closer
together.
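The round-trip savings can be illustrated with a parent-linked chain of blocks. Real GraphSync selectors are IPLD selector expressions, not Python functions; this sketch only contrasts per-block fetching with a single selector request.

```python
# A tiny parent-linked DAG: block id -> {"parent": id or None}
chain = {
    "head": {"parent": "b2"},
    "b2":   {"parent": "b1"},
    "b1":   {"parent": None},
}

def per_hop_fetch(store: dict, start: str):
    """Bitswap-style: one round trip per block just to learn the next parent."""
    out, node, round_trips = [], start, 0
    while node is not None:
        round_trips += 1
        out.append(node)
        node = store[node]["parent"]
    return out, round_trips

def selector_fetch(store: dict, start: str):
    """GraphSync-style: one 'all parents of start' request; the responder
    walks its own store and streams every matching block back."""
    out, node = [], start
    while node is not None:
        out.append(node)
        node = store[node]["parent"]
    return out, 1  # a single request/response exchange

blocks_a, trips_a = per_hop_fetch(chain, "head")
blocks_b, trips_b = selector_fetch(chain, "head")
assert blocks_a == blocks_b == ["head", "b2", "b1"]
assert trips_a == 3 and trips_b == 1
```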
B
But
we
added
responding
to
graphsync
requests
as
part
of
ipf's
0.5,
which
kind
of
brought
some
of
the
those
capabilities
to
ipfs
as
well
it
for
the
people
who
have
really
massive
amounts
of
data
and
just
in
general,
this
is
a
super
useful
tool
that
can
be
used
in
a
whole
ton
of
different
ways,
but
from
a
content
running
or
from
a
content
exchange
perspective.
B
Super
useful
is
the
ability
to
actually
export
whole
graphs
of
data
reliably
and
then
import
them,
and
so
this
is
actually
really
useful
in
the
file
point
case,
and
also
in
kind
of
an
offline
data
transfer
case
where
you
have
really
massive
amount
of
data
where
what
you
really
want
to
do
is
export
an
entire
graph
of
data
and
then
import
it
on
the
other
side,
instead
of
say
sending
it
over
a
wire.
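The export/import flow can be sketched as flattening a set of content-addressed blocks into one archive and re-verifying on import. The real feature uses the binary CAR (Content Addressable aRchive) format with varint-framed CID/block pairs; the JSON encoding here is just a stand-in to show the verify-on-import property.

```python
import hashlib
import json

def block_id(data: bytes) -> str:
    """Simplified content address for a block (bare SHA-256, not a real CID)."""
    return hashlib.sha256(data).hexdigest()

def export_graph(blocks: dict) -> bytes:
    """Flatten a block store {id: bytes} into a single archive blob."""
    return json.dumps({bid: data.hex() for bid, data in blocks.items()}).encode()

def import_graph(archive: bytes) -> dict:
    """Rebuild the block store, re-verifying every block against its id,
    so a corrupted USB stick or disk cannot sneak in bad data."""
    decoded = {bid: bytes.fromhex(h) for bid, h in json.loads(archive).items()}
    for bid, data in decoded.items():
        assert block_id(data) == bid, "corrupted block in archive"
    return decoded

blocks = {block_id(d): d for d in (b"root", b"child-1", b"child-2")}
restored = import_graph(export_graph(blocks))
assert restored == blocks
```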
B
Maybe
you
want
to,
like
you
know,
put
it
on
a
usb
and
mail
it
to
someone
that'll
honestly
be
faster,
and
so
this
was
added
also
to
kind
of
the
performance
improvements
within
ipvs.
B
We
also
added
a
number
of
important
things
from
an
ad
performance
perspective
and
from
a
hardening
perspective.
So
some
of
the
folks
here
may
have
heard
about
the
improvements
to
gossip
sub,
which
is
the
kind
of
peer-to-peer
gossip
exchange
protocol.
That's
used
by
many
blockchains
with
who
are
using
lib
p2p
as
their
peer-to-peer
networking
layer,
and
this
also
used
a
ton
of
the
the
testing
using
test
ground
in
order
to
simulate
a
number
of
attack
scenarios
and
validate
and
make
improvements
to
gossip
sub.
B
That
could
then
prove
resilient
to
those
sorts
of
attacks
and
so
worked
with
a
number
of
different
groups,
ethereum
filecoin
and
others
to
identify
the
sorts
of
attack
vectors.
We
needed
gossip
sub
to
be
resilient
to
simulated.
B
Those
attacks,
made
those
improvements
and
then
were
able
to
demonstrate
using
test
ground
that
the
gossip
sub
was
now
resilient
to
those
and
put
out
a
paper
as
well
on
kind
of
the
the
gossip
sub
v
1.1
spec,
which
is
now
what
blockchains
like
falcon
and
eth2
are
running,
on,
top
of
which
is
really
awesome,
cool.
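A core hardening mechanism in GossipSub v1.1 is peer scoring: peers earn score for useful behavior and lose it for abuse, and low-scoring peers are pruned from the mesh. The weights below are invented for illustration; the real spec defines many more parameters and decay behavior.

```python
def peer_score(first_deliveries: int, invalid_messages: int,
               time_in_mesh_s: float) -> float:
    """Toy flavor of GossipSub v1.1 scoring (weights are illustrative only)."""
    return (2.0 * first_deliveries      # delivered messages we hadn't seen yet
            - 10.0 * invalid_messages   # heavy penalty for bogus messages
            + 0.01 * time_in_mesh_s)    # small credit for stable membership

def should_prune(score: float, threshold: float = 0.0) -> bool:
    """Peers scoring below the threshold get pruned from the mesh."""
    return score < threshold

assert not should_prune(peer_score(5, 0, 600))  # honest, useful peer stays
assert should_prune(peer_score(1, 3, 600))      # spammy peer gets pruned
```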
B
Looking
at
q3,
this
was
a
quarter
of
of
standards,
and
so
we
took
a
number
of
protocols
that
had
either
been
experimental
or
had
been
kind
of
on
their
way
to
being
phased
out
and
started,
making
them
default.
So
the
quarter
of
defaults,
we
added
quick
support
by
default.
This
had
been
kind
of
experimental
in
ipfs
for
a
long
time
and
is
now
the
default.
So
you
know
the
quick
spec
had
been
evolving.
B
Finally,
things
solidified
around
draft
28
and
we
were
able
to
to
solidify
that
and
make
it
the
default,
and
it
adds
a
number
of
benefits
around
using
fewer
resources
and
therefore,
and
being
able
to
maintain
revive
connections
much
more
quickly,
which
is
really
great.
B
We
also
added
teal
three
by
default,
so
we
now
have
kind
of
a
default
standard
security,
transport,
replacing
secio,
which
was
kind
of
our
before
tls
1.3
was
solidified
kind
of
our
version
of
it,
which,
in
addition,
we
then
deprecated,
so
that
we
were
now
using
tls,
1.3
and
also
added
noise
support
so
that
we
had
cross
implementation
support,
including
in
browsers
where
kind
of
live
b2p
over
tls
isn't
isn't
available,
isn't
an
option,
and
so
now
we
have
implementations
in
noise
across
browsers
and
we
have
tls
1.3,
which
is
the
default
when
you're
working
between
say
like
go
ipfs
or
liberty,
keynotes
and
and
then
we're
able
to
deprecate
sakaio,
which
is
also
very
exciting
and
means
that
we
also,
you
know
very
much,
encourage
people
to
be
upgrading
to
these
latest
security
transports
and
to
stay
connected
to
the
newer
nodes
in
the
network
who
are
now
using
different
defaults.
B
I
think
tls
was
added
to
ipfs
nodes
back
in
early
2018,
and
so,
if
you
are
more
than
two
years
old,
definitely
make
sure
that
you
are
upgrading
on
the
network,
because
newer
nodes
will
you'll
no
longer
be
able
to
talk
to
them.
If
you
do
not
have
tls
support.
B
Another
thing
we
worked
on
standardizing
in
in
q3
was
the
unified
pinning
service
api,
there's
a
number
of
different
pinning
services
that
have
grown
up
on
ipfs,
there's
infira,
there's
pinata
and
there's
a
number
of
others,
and
we
are
working
to
bring
these
groups
together
and
create
a
unified
api
so
that
any
application
can
kind
of
have
a
seamless
interface
and
support
multiple,
pinning
services
all
at
once
using
the
same
api
and
simplifies
the
development
experience
and
makes
it
easy
to
have
kind
of
give
more
of
that
optionality
to
the
end
user
of
an
application,
and
we've
been
working
to
integrate
this
the
ipfs
web
ui,
so
that
you
can
you're
uploading
files
to
your
local
library
them
up
to
a
pinning
service
and
kind
of
that
redundancy
and
resiliency
by
default,
which
is
really
nice
and
then,
finally,
not
within
ipfs.
B
But
ipfs
and
ipns
were
both
added
as
registered
protocol
handlers
in
chromium
and
so
in.
In
many
places
where
you
have
chromium
powered
browsers,
which
is
at
many
places
at
this
point,
those
are
now
available
to
be
be
added
and
registered
there,
which
is
great
step
for
web
browser
support.
B
Speaking
of
new
friends,
we
spent
a
good
bit
of
time
documenting
some
of
the
awesome
groups
who
are
building
on
top
of
ipfs
so
that
they
could
share
kind
of
both
what
brought
them
to
ipfs
and
what
their
development
architecture
and
story
looks
like.
We
have
a
lot
of
different
people
who
are
using
and
building
on
ipvs
in
different
ways.
B
So
if
you
are
excited
about
starting
your
own
thing
on
ipfs
and
you
want
to
learn
from
the
the
motivations
and
the
the
kind
of
upgrade
path
of
other
people
building
on
the
platform,
this
is
super
useful
and
there's
a
lot
of
good
information
within
here
about
groups
like
fleeq
and
audience
and
open
bazaar
and
how
they
architected
themselves.
On
top
of
ipfs.
B
Benefiting
from
that
and
and
kind
of
in
parallel
that
to
that
as
well
falcon
ignite,
which
started
in
the
july
time
frame
and
is
ongoing
with
a
number
of
different
programs
from
get
coin
apollo
to
tachyon's,
falcon
launchpad
and
hack
fest
with
global
all
brought
a
ton
of
new
projects
and
teams
to
our
ipvs
ecosystem
and
kind
of
stack
here,
using
ipfs
using
lip22p
and
using
filecoin,
and
so
we
saw
over
a
hundred
new
projects
building
on
ibfs
just
within
q3.
B
I'm
sure
this
is
continuing
to
grow,
and
that
was
amazing
for
us,
as
both
like
a
feedback
loop
of
many
different
teams,
building
on
top
of
the
the
project
all
at
once
and
also
kind
of
finding
their
own
niches
and
being
able
to
help
and
support
each
other.
It's
been
great
to
see
these
teams
come
out,
and
many
of
them
are
kind
of
continuing
and
formalizing.
B
These
things
you
know
with
with
even
deeper
accelerators
and
becoming
like
very
stable
parts
of
the
ecosystem,
which
is
super
exciting
to
see
both
that
kind
of
top
of
funnel
adoption
and
help
us
improve
the
developer
journey,
to
make
it
easier
and
easier.
B
Over
time,
I'm
almost
out
of
time
but
q4,
probably
the
main
thing
that
many
folks
already
know
about
was
all
of
the
the
work
of
launching
the
falcon
mainnet
was
a
really
awesome
moment
for
the
entire
interplanetary
stack
and
all
of
the
protocols
that
are
used
within
filecoin
in
order
to
kind
of
make
that
network
successful
and
also
for
many
people
in
the
falkland
ecosystem,
who
are
also
kind
of
making
development
kind
of
within
the
ipf's
ecosystem,
utilizing
falcoin
for
longer
term
persistence
and
storage.
B
Many
of
those
groups
also
launched
improvements
and
tooling.
That
makes
that
easy
from
a
development
perspective.
If
you
want
to
hear
about
more
about
the
ways
we
contributed
to
or
how
we
see,
this
evolving
really
recommend
watching
the
talks
from
falcoid
liftoff
week
on
the
falcon
youtube
channel
they're
phenomenal.
B
We
also
saw
a
couple
of
other
groups
launch
awesome
protocols
on
top
of
ipfs
audies,
launched
in
the
october
time
frame
with
over
one
million
monthly
active
streams
and
eth2
actually
just
launched
yesterday,
which
is
awesome
and
they're
using
lib
p2p
and
actually
have
funded
and
helped
create
a
number
of
different
laptop
implementations.
B
And
there
is
now
four
client
implementations
interoperating
on
on
the
beacon
chain,
so
huge.
Thank
you
to
these
teams
for
like
helping
push
libby
to
be
forward
and
making
it
better
super
exciting
cool,
and
my
call
to
action
for
you
in
my
last
minute
is:
we
are
actively
working
on
ipvs
2021
planning,
there's
a
lot
of
stuff
happening
in
the
ecosystem
and
a
lot
of
ideas
on
how
we
can
continue
to
make
it
better
and
where
we
should
focus
our
time.
B
As
you
know,
both
some
of
the
core
working
groups
and
then
also
this
entire
ecosystem
of
awesome
projects
who
are
trying
to
work
within
the
ipfs
ecosystem
and
make
it
better
and
so
15
proposals
already
awesome
conversations
happening.
Please
jump
in.
If
you
want
to
talk
about
ipves,
1.0
or
content,
permissioning
or
new
dht
improvements
come
join
the
conversation
we
would
love
to
hear
your
thoughts
cool
well.
A: We have two questions in our Q&A for you. First question: what are the plans for merging the IPFS and Filecoin ecosystems?
B
Yeah,
I
would
say
a
lot
of
this
is
organic,
like
it's
just
honestly
happening
on
its
own
accord
and
it
started.
You
know.
Over
a
year
ago,
lots
of
groups
within
the
ipvs
ecosystem
were
already
looking
to
things
like
falcoin,
or
were
attracted
as
that
as
an
upgrade
path
for
them
for
keeping
data
storage
around
inaccessible,
a
number
of
groups
who've
been
building,
developer,
tooling
and
apis
system.
B
Infuria,
textile
fleeq
others
have
already
dive
deep
into
filecoin
and
are
some
on
how
to
configure
use
it
from
a
developer,
and
so
it
this
is
well
underway.
Definitely
go!
Look
at
some
of
the
videos.
These
these
groups
are
highly
overlapped
and
in
many
ways
using
falcoin
is
using
an
ipfs
node.
It
runs
on
top
of
the
same
protocols.
It
uses
little
b2p,
it
has
ipld
data
to
add,
add
new
data
to
the
falcon
network.
You
first
turn
it
into
ipfs
shape
data
and
send
it
over
liberty
to
people.
B
So
in
many
ways
these
are
already
very
shared
ecosystems.
Obviously,
like
not
all
ipvs
use,
cases
are
going
to
utilize
falcoin,
which
is
awesome
and
perfectly
fine.
But,
like
we
see
these
as
very
overlapped
and
complementary.
A: Awesome. Apart from Fleek, what is your personal favorite project using IPFS?
B: I spent a while thinking about this last time, and I avoid picking favorites in the IPFS ecosystem, because I love all of them and they're doing really, really amazing work. The one that I'll actually shout out, which is not technically in the IPFS ecosystem, is Matrix, I guess now called Element.
B
I
love
their
team
they're
just
phenomenal.
They
think
really
thoughtfully
about
kind
of
the
the
content,
moderation
and
agency
of
of
nodes
in
the
network
and
they've
actually
started
building
a
p2p,
matrix
or
p2p
element
on
top
of
libpdp,
which
is
super
cool,
and
I'm
really
excited
to
see
where
that
one
goes.
A: Awesome. Thank you so much for your time today; greatly appreciate it.