From YouTube: Project S4 - Anthony Budd
Description
Anthony Budd, creator of Ideea (a platform of online services built on alt-tech infrastructure), joined us to talk about S4: S3-compatible storage, accessed through Tor and distributed using IPFS.
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
So hi everybody, my name is Anthony Budd, developer of S4. A little bit about me: I've been doing web development pretty much since I left high school. I've been working in PHP and Node.js, and I'm a Laravel contributor. I've also done a fair bit of WordPress contributing throughout my career, but I tend not to talk about that too much. I'm an ex-Apple developer, and I've also developed a project called VIPFS, which you may or may not have heard of.
So that's a bit about me. I also run a delivery company called Teleport, and I'll quickly touch on that right now, because Teleport is really what funded S4. Basically, I run a delivery company in Austin, Texas, and as you can see, we've got a handful of quite large clients. Basically all of the income I generate comes from this; this is my, like, 9-to-5, it's my thing, that's what I do every single day. This is really what allows me to make projects like S4 and VIPFS. So, a little plug there. Great.
So what is S4? S4 is designed to be 100%-compatible S3 storage, accessed through Tor and distributed using IPFS. It's 100% compatible with S3 and all the existing SDKs and whatnot. It's decentralized, it's anonymous, and it's obviously permanent, because IPFS.
When you write data into the server, you can create buckets just like in S3. Each of your buckets gets paired off with a key in IPFS, and every time you update the data in a bucket, the bucket gets published using that key. What this means is that you can address any of the content stored on your S4 server using this kind of URL here, where this would be the key of the bucket, and this would be the path of the file that you stored in the bucket.
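The addressing scheme just described can be sketched as a tiny helper. This is only a sketch: the gateway host, the IPNS key, and the file path below are placeholder assumptions, not values from the talk.

```javascript
// Sketch: an object in an s4 bucket is addressed as
// /ipns/<bucket key>/<file path> on an IPFS HTTP gateway.
// Gateway, key, and path here are placeholders.
function s4ObjectUrl(gateway, bucketKey, filePath) {
  // Strip any leading slashes so we don't emit "ipns/key//path"
  return `${gateway}/ipns/${bucketKey}/${filePath.replace(/^\/+/, '')}`;
}

console.log(s4ObjectUrl('https://ipfs.io', 'QmExampleBucketKey', '/images/logo.png'));
// → https://ipfs.io/ipns/QmExampleBucketKey/images/logo.png
```

Because the bucket is republished under the same IPNS key on every update, this URL stays stable even as the bucket's contents change.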
So I guess a question a lot of you are asking is: isn't it quite difficult or complicated to communicate with this S4 server from all of these programming languages? So how do we achieve this? It's achieved with a sidecar container. This slide that you're seeing here represents an example application using S4.
So say we had an application; I've got a Node example here, and I've titled it my-app. We have a Node application and we want to store data, say images, on S4 instead of S3. How do we do that?
So we run our two containers: our Node.js container here, and our S4 client, which acts as a sidecar container. As for the code inside the Node app, you should be able to recognize a lot of this code: we import the official Amazon AWS SDK, and we even instantiate the S3 class from it. But notice here:
The endpoint is pointing to our local S4 client, which is this container here, as you can see. We also boot it with the access key and secret access key as well, and we can use all of the existing AWS methods that the SDK provides us, completely; no other modification has to be made. But when I put this object into the bucket, it will go over Tor into S4 and then be published to IPFS.
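The configuration change described here can be sketched as follows. The sidecar address, credentials, and bucket/object names are all placeholder assumptions; the AWS calls shown in comments follow the aws-sdk v2 style used in the talk.

```javascript
// The only deviation from a standard AWS setup is the endpoint, which
// points at the local s4 sidecar container instead of amazonaws.com.
// Host, port, and credentials here are placeholders.
const s4Config = {
  endpoint: 'http://localhost:9000',   // local s4 sidecar container
  accessKeyId: 'S4_ACCESS_KEY',        // placeholder credential
  secretAccessKey: 'S4_SECRET_KEY',    // placeholder credential
  s3ForcePathStyle: true,              // path-style addressing
  signatureVersion: 'v4',
};

// With the SDK installed, the rest is unchanged S3 code:
//   const AWS = require('aws-sdk');
//   const s3 = new AWS.S3(s4Config);
//   s3.putObject({ Bucket: 'my-bucket', Key: 'example.pdf', Body: data })
//     .promise()
//     .then(() => console.log('stored via s4, published to IPFS'));

console.log(s4Config.endpoint);
```

Everything after the constructor is ordinary S3 code, which is the point of the sidecar design: the application never knows Tor or IPFS is involved.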
So then, on to reading data: how do we read the data from S4? Again, I had a few specifications on how I wanted this to work, namely that I didn't want to require JavaScript. The beauty of S4 is that the data gets requested over HTTP gateways, meaning that if you don't want to use JavaScript and you just want to have a plain image tag, you can have a plain image tag, and that image will be loaded from S4 using a public gateway.
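Since objects resolve over plain HTTP gateways, a page can embed them with an ordinary image tag and no client-side JavaScript at all. A sketch of what such a tag would look like (the gateway host, key, and path are placeholder assumptions):

```javascript
// Build the markup for a no-JavaScript read: a plain <img> whose src is
// an IPFS gateway URL pointing into an s4 bucket. Key and path are
// placeholders, not real values.
const bucketKey = 'QmExampleBucketKey';
const imgTag = `<img src="https://ipfs.io/ipns/${bucketKey}/images/photo.jpg">`;

console.log(imgTag);
// → <img src="https://ipfs.io/ipns/QmExampleBucketKey/images/photo.jpg">
```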
This also allows for human-readable file paths: this hash only references the bucket, so all of the content paths are human-readable, as you can see here. So, I've said a lot there; let's try to do a live demo where we're going to actually create an S4 server. If anyone's interested in following along with this, you can go to this URL right here, ideea-inc/s4, and you can follow along. It only requires four commands to actually start up the server, so let's give it a go.
We'll cd into the s4 repository, and once you're in the repository, the first thing we have to do is make an onion address for our server; we have to have a private key. We can do that using this command here. Again, this is all provided in the README of the repo, so it's pretty much all copy-and-pasteable. If we run that command, what happens? Great, it's generated us this onion address here. Now, this onion address is referencing our server.
This might take a second, so what we'll do is continue with the rest of the presentation. So now we have this S4 server set up, and right now it's running on my laptop. We can go ahead and assume that this is, say, some server somewhere on the planet.
A
Obviously
the
actual
ip
address
is
unknown
because
it's
using
tor.
So
how
would
our
application
actually
write
data
to
that
so
I'll
give
a
quick
example.
A
First,
let's
nano
we
have
to
quickly
update
the
env
and
give.
A
Good,
so
that
now
is
booting
up
what
would
be,
which
would
be
like
an
application
which
might
write
ss
for
in
production.
So let's actually test and see if we can write to it. Now we're going to run this block of code here, which hopefully will create us a bucket on our server and upload a file into that bucket, so I'll run the command. Again, all of these commands are in the README of the repository. And it looks like we've successfully created our bucket and successfully uploaded a file into it.
Here we go. So, MinIO is like an open-source version of AWS S3; all of its API methods are compatible with S3. I'm just going to try and log into it now, and hopefully we'll be able to find our file in there, and then I also want to show you how you can actually read that file over IPFS. So, wait, we've logged in.
There we go. So what I'm doing now is the equivalent of logging into the S4 server, and as you can see here, this is the example file that I've just uploaded using the example code. The obvious question is: how would I then access this file over IPFS? Now, there's a bucket which is published to IPFS called system, and inside the system directory, all of the hashes for those keys are accessible here in this file.
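The lookup flow just described, a system bucket that maps buckets to their IPNS keys, might look something like the following sketch. The mapping shape, bucket name, key, and gateway host are all assumptions for illustration, not the real file format used by S4.

```javascript
// Hypothetical sketch: given a parsed "system" mapping of bucket names
// to their IPNS keys, build the gateway URL for a file in a named bucket.
// All values here are placeholder assumptions.
const systemKeys = { 'my-bucket': 'QmExampleBucketKey' };

function resolveBucketUrl(bucketName, filePath) {
  const key = systemKeys[bucketName];
  if (!key) throw new Error(`unknown bucket: ${bucketName}`);
  return `https://ipfs.io/ipns/${key}/${filePath.replace(/^\/+/, '')}`;
}

console.log(resolveBucketUrl('my-bucket', 'example.pdf'));
// → https://ipfs.io/ipns/QmExampleBucketKey/example.pdf
```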
A
Ipns,
this
won't
load
straight
away,
obviously
because
the
because
it's
going
to
take
a
moment
to
actually
find
that
bucket.
But
what
happened
is
once
after
that
buckets
are
propagated
over
the
ipfs
network,
you'll
be
able
to
navigate
to
that
url
and
also
go
to
navigate
to
the
forward,
slash
exam
pull.ppf,
and
that
will
that
will
then
load
that
file,
then
over
the
ipfs
network.
A
What
this
allows
is
allows
us
to
store
data
on
ipfs
without
requiring
developers
to
actually
change
their
code.
I
think
there's
a
lot
of
great
products
in
the
ips
community
right
now,
but
I
don't
think
it's
massively
realistic
to
expect
large
large
corporate
companies
and
development
teams
to
change
to
change
their
to
change
their
code
to
kind
of
fit
with
ipfs.
A
It
makes
far
more
sense
that
we
we
make
ipfs
and
augment
any
of
those
changes
that
are
necessary
and
that's
what
I've
managed
to
do
here
using
this
kind
of
sidecar
container
system.
This
allows
developers
to
keep
their
existing
code,
but
then
write
the
data
to
ipfs
under
the
hood,
with
actually
with
no
changes
whatsoever
required
to
their
code.