From YouTube: How We’ve Scaled Pinata with Matt Ober
Description
Pinata CTO Matt Ober takes us behind the scenes of Pinata's history and progress, especially when it comes to scaling, in this presentation from the IPFS June Meetup. Learn more about Pinata at https://pinata.cloud/.
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Join your local IPFS meetup to attend our next event: https://www.meetup.com/pro/ipfs/
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
My name is Matt Ober. I am the CTO and co-founder of Pinata. We are an IPFS tool suite and pinning service that has been around for almost three years now, I think, and, like Michael said, our kind of catchphrase is: we are the easiest way to use IPFS. So everything we do is geared towards making your experience as a developer hassle-free.
So today I wanted to give a talk on how we scaled from what I would call a very small, experimental hackathon tool to the global IPFS pinning infrastructure service that we are today. It's kind of a fun little journey, and I figured it might be interesting for some of the people in the space. All right, so, like I said, we started at ETH Berlin in 2018 as a proof of concept.
We were offering things up as a hackathon tool, and we provided pretty minimal functionality: upload a file, retrieve it from a gateway. That was basically it.
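To make that concrete, here's a minimal sketch of that upload-then-retrieve flow against Pinata's pinning API. The `pinFileToIPFS` endpoint and bearer-token auth come from Pinata's public API; the JWT and file path are placeholders, and Node 18+ is assumed for the built-in `fetch`, `FormData`, and `Blob`:

```typescript
// Minimal sketch: upload a file to Pinata, then retrieve it from the gateway.
// Assumes a PINATA_JWT environment variable holding an API token.
import { readFile } from "node:fs/promises";

async function pinFile(path: string): Promise<string> {
  const form = new FormData();
  form.append("file", new Blob([await readFile(path)]), "file");

  const res = await fetch("https://api.pinata.cloud/pinning/pinFileToIPFS", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.PINATA_JWT}` },
    body: form,
  });
  if (!res.ok) throw new Error(`upload failed: ${res.status}`);

  const { IpfsHash } = await res.json(); // the CID of the pinned file
  return IpfsHash;
}

// Retrieval is just an HTTP GET against any IPFS gateway.
const cid = await pinFile("./example.txt");
console.log(`https://gateway.pinata.cloud/ipfs/${cid}`);
```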
So during ETH Berlin, we went around spreading the word about Pinata. We wanted to get feedback and see how people were using our system. And under the hood, we were running a super, super simple setup, basically a monolith, as a lot of you might know.
So we were running everything on a singular machine in the cloud: one machine hosted our API and our one IPFS node, which was receiving files. This node also acted as a gateway, serving content through the IPFS network and then through our gateway.pinata.cloud URL. So this is kind of what it looked like: everything running on one machine, the API and the node, storing content and then distributing it around through the network and the gateway.
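As a rough sketch of what that monolith might have looked like, here's a single process exposing an upload route in front of a co-located IPFS (Kubo) node. This is an illustration, not Pinata's actual code; the route name, ports, and the `kubo-rpc-client` choice are my assumptions:

```typescript
// One machine, one process: an HTTP API in front of a local IPFS node.
import express from "express";
import multer from "multer";
import { create } from "kubo-rpc-client";

const ipfs = create({ url: "http://127.0.0.1:5001" }); // local node's RPC port
const upload = multer({ storage: multer.memoryStorage() });
const app = express();

// Upload: add the file to the local node and pin it there.
app.post("/pin", upload.single("file"), async (req, res) => {
  const { cid } = await ipfs.add(req.file!.buffer, { pin: true });
  res.json({ cid: cid.toString() });
});

// The same node's gateway (port 8080 by default) serves the content back,
// which is the role gateway.pinata.cloud played in this early setup.
app.listen(3000, () => console.log("API listening on :3000"));
```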
Shortly after we launched as an actual product, though, we started to gain traction amongst the Web3 community, and with that traction came a lot higher levels of usage. So we quickly recognized that, in order for Pinata to succeed, we were going to have to take what we started off with as kind of an experimental tool and turn it into something that was actually scalable.
So it was at this point that we decided to take Pinata from an experimental side project to something we would begin working on full-time. From there, we had some challenges scaling from this proof of concept to an actual product, and I want to talk about some of the challenges we faced and then how we overcame them as an engineering team.
We were, again, running everything on a singular machine, and during times of high traffic this meant that things were getting a little resource constrained. We were starting to put out fires trying to handle that traffic, scaling the nodes up.
This helped with resources for a while, but the second challenge we ran into was downtime. When you're running a product that other products are going to be building on top of, you need to make sure that everything you're doing is geared towards eliminating downtime from your infrastructure.
A lot of this stuff may seem kind of obvious, and a lot of it is actually solved for by modern cloud providers, but starting up in the IPFS space, there is no S3 for IPFS and there are no managed database services; we were building everything from scratch. So doing all of this manually was kind of a fun process that we went through as an engineering team.
So we were seeing requests come from everywhere in the world: the United States, South America, Africa, Asia, India, Australia, Europe. All of these continents and locations around the world were trying to interact with the one region we had in Europe, which quickly turned out not to be as quick as we would have liked for everybody else in the world. Users in, say, Australia, for example, were seeing pretty long upload times.
So we took that as our next upgrade priority, because we wanted to make sure that everybody around the world was getting equally fast upload times. Our solution: similarly to how we scaled out in one region with multiple IPFS nodes, we then scaled out to multiple global locations.
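There's more than one way to route users to the nearest region (GeoDNS, anycast, client-side probing). As one illustrative approach, not necessarily what Pinata does, a client could probe each regional endpoint and pick whichever answers fastest; the hostnames and `/health` route below are hypothetical:

```typescript
// Illustrative client-side region selection: probe each regional upload
// endpoint and keep the lowest-latency one that responded.
const REGIONS = [
  "https://eu.uploads.example.com",
  "https://us.uploads.example.com",
  "https://ap.uploads.example.com",
];

async function fastestRegion(): Promise<string> {
  const probes = REGIONS.map(async (base) => {
    const start = performance.now();
    await fetch(`${base}/health`, { method: "HEAD" });
    return { base, ms: performance.now() - start };
  });
  // allSettled drops unreachable regions instead of failing the whole pick.
  const results = await Promise.allSettled(probes);
  const ok = results
    .flatMap((r) => (r.status === "fulfilled" ? [r.value] : []))
    .sort((a, b) => a.ms - b.ms);
  if (ok.length === 0) throw new Error("no region reachable");
  return ok[0].base;
}
```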
So we rolled this out, I think, around March of 2020, and since then we've been continually expanding into more and more locations. As of right now, we should be serving content on every major continent of the world through our network of host nodes and gateways.
So with this setup, we now have uploads coming in and getting routed to the closest host node.
Once we've chosen that node, the content is stored on that node for long-term storage and future retrieval on the network. And speaking of retrieval, retrieval looks pretty similar on our end, but a slight difference here is that when the user request comes in, the gateway is then going to fetch data from the closest host node.
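As a sketch of that retrieval idea, a gateway can try the nearest host node first and fall back to the wider IPFS network only if that node doesn't have the content. The internal hostname and fallback gateway below are assumptions, not Pinata's implementation:

```typescript
// Sketch of nearest-node-first retrieval with a network fallback.
const NEAREST_HOST_GATEWAY = "http://host-node.internal:8080"; // hypothetical
const PUBLIC_FALLBACK = "https://ipfs.io"; // resolves via the wider network

async function fetchByCid(cid: string): Promise<Response> {
  const local = await fetch(`${NEAREST_HOST_GATEWAY}/ipfs/${cid}`);
  if (local.ok) return local; // served straight from the nearby host node
  return fetch(`${PUBLIC_FALLBACK}/ipfs/${cid}`);
}
```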
So that's kind of a quick overview of what our network looks like and how we've scaled up to where we are today. What's next on our agenda is going to be a focus on things like content discovery, even faster content uploads and downloads, and making sure that we're constantly increasing our rate limits, so you guys can upload more and more content all the time. And then, lastly, we want to hear from you guys as well.