From YouTube: IPFS All Hands November 26, 2018 📞🙌🏽
Description
IPFS Newsletter: https://tinyletter.com/ipfsnewsletter
B
Yay, first problem solved. So once again we will start with announcements, then we will get into the presentation, and finally we will have five or ten minutes for questions and answers. Could you please put your questions and comments in the chat, and I will call on you based on your name in the chat? That would be great; it'll make things a little bit more organized. Let's begin with announcements.
B
Okay, Mike, please give your announcement.
D
Hey, sorry, can people see my screen right now?

B
Yes.

D
I apologize for the delay there; Zoom was giving me some trouble. I wanted to do a really brief announcement, but it's extensive enough that it has a few slides.
D
My name is Mike Goelzer. I'm the working group captain for libp2p, the overall libp2p project. We are in the process of creating a roadmap, just like you are (you're much further ahead of where we are currently), and we've put together our first major draft, where we want feedback from stakeholders generally, and in particular from IPFS working group captains and members whose work depends on libp2p. I'll post that link in the notes if I haven't already.

Basically, we just put this document together. I'm not going to try to go through the whole thing; I just want to explain the state that it's in and then how you can help us right now. This document is just an aggregation of every idea that's been discussed by the libp2p team for what we could do in the next five years or so.
D
They range from tactical things like improving the docs to really far-out ideas like making a libp2p hardware device that people install in their homes. It's just a huge compilation of these ideas. It's not a roadmap in the sense that there are no dates, and the items are not in any type of priority order; that's intentional, and we're not committed at this point to any of those things. They're just a roll-up, and Raúl and I did most of the technical parts.
D
You should tell us: make a comment in the doc. I'll create a GitHub issue too, but it's probably best to put comments in the doc. If you see something in the document that is particularly important to you, you should +1 it; that'll help us learn about dependencies that we may not have already known about. Okay, so I'll stop my screen share. Thanks for your time.
B
Thanks. Any quick questions before we continue?
C
Okay, so as you stated, this is the IPFS live streaming, which is something we launched at the Our Networks conference in 2018. Our Networks is a fairly small conference; it just started up in Toronto, and this is its second year. It has only about a hundred participants.
C
It's basically about community networks, mesh networks, things of that nature, focusing more on the people side, although there's quite a bit of tech in there as well. For live-streaming the conference, our budget was basically a hundred bucks. It wasn't something that was originally planned on happening, but...
Basically,
myself
Ilan
we're
kind
of
talking
about
how
we
can
do
video
streaming
over
ipfs
and
some
optimistic
person
said
hey.
Why
don't?
We
live
streamed
the
conference
over
ipfs
and
we
aimed
at
making
the
content
distribution
easily
scalable
make
it
reprimand
other
similar
sized
conferences
and
dog
food
for
decentralized
text,
basically,
testing
the
limits
see
what
we
can
do
see
what
we
can't
do
things
of
that
nature,
so
the
project
started.
The
first
commit
was
May
19th.
The
conference
was
July
13th,
looking
back
at
it.
That
was
two
months
somehow
we
pulled
her
off.
C
Although we had a proof of concept already in place, it was still quite the ask, but nonetheless we did do it. So before we start, I'm going to talk about our stack. Our stack at the Our Networks conference was basically a couple of HDMI capture cards and some audio-visual hardware, and we used OBS Studio on premises to manage the actual streams.
C
We used OpenVPN for authentication and publishing. DigitalOcean was the host and managed the DNS and all that. Nginx with the RTMP module was used to proxy the traffic to the real-time servers. FFmpeg did all our encoding into HLS, IPFS did the storage and distribution, Video.js was used as the player for the website, and Terraform was the means we used to make it reproducible and easily deployable. So with that, I am going to show you a quick demo of how to actually deploy this.
C
So we're going to start by cloning the Git repository where all this is stored, for anybody's use, and then we're going to do a bit of configuration: we're going to set a domain name where we're going to host this, set an API email address for Let's Encrypt, and generate a public key for the SSH sessions. That key gets put into DigitalOcean and a fingerprint gets added. And speaking of DigitalOcean...
C
So this just prepares Terraform, and then terraform apply will actually take the whole stack and deploy it onto its own set of DigitalOcean droplets. This takes anywhere from ten to forty minutes; as of last night we got it down to ten minutes, once we found out what was happening. We're going to let this run while we continue the presentation and check it out after. So let's talk a little bit about what we did at Our Networks. This is kind of a map of how the Our Networks streaming was taken care of.
C
We basically had two feeds: an HDMI feed from the presenter's laptop, and one from the cameras. They would go into a laptop running OBS. OBS would then publish the stream over a VPN connection, over the Internet, to the RTMP server, which is one of the droplets that the stack creates. Because we did not know if this would work, that RTMP server actually hosts the files for two different means of streaming: the IPFS stream and a plain, legacy HTTP stream. So if one failed, we still had the other one.
C
So we looked at something called HLS, which a lot of websites have been using for various reasons. HLS was developed by Apple and released in 2009. It basically breaks the stream into small chunks, and the sequence of those chunks makes up the stream. What HLS does is create a playlist that describes what the sequence is. So, as an example, there's an m3u8 list of the different chunks that the client downloads; then the client starts downloading each chunk individually and playing it as it goes.
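As a rough sketch of what such a playlist looks like, here is a minimal m3u8 media playlist and a tiny parser for it (the segment names are hypothetical examples, not the conference's actual output):

```python
# Minimal sketch of an HLS (m3u8) media playlist. Tag lines start with
# '#'; every other non-empty line is a segment URI. The client fetches
# the playlist, then downloads and plays the segments in order.
SAMPLE_PLAYLIST = """\
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:10.0,
chunk120.ts
#EXTINF:10.0,
chunk121.ts
#EXTINF:10.0,
chunk122.ts
"""

def segment_uris(playlist_text):
    """Return the segment URIs in playback order."""
    return [line for line in playlist_text.splitlines()
            if line and not line.startswith("#")]

print(segment_uris(SAMPLE_PLAYLIST))
# ['chunk120.ts', 'chunk121.ts', 'chunk122.ts']
```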
C
So the question we came up with is: what if we just hash each individual chunk? And that's what we did. The video source was an RTMP server. We used FFmpeg to actually create the HLS chunks, the small pieces of video, and then we just wait until a chunk is created. Once the chunk is created, we add it to the IPFS cloud and store that new hash somewhere so we can rewrite the m3u8 list, and then just rinse and repeat. So we created a log file that looks kind of like this.
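The loop just described can be sketched roughly as follows. Note that add_to_ipfs here is a fake stand-in (a real version would shell out to the ipfs CLI or call its HTTP API), and the helper names are hypothetical, not the repository's actual scripts:

```python
# Rough sketch of the publishing loop: wait for ffmpeg to finish a
# chunk, add it to IPFS, record the hash in a log, then rewrite the
# m3u8 playlist so each segment name becomes an /ipfs/<hash> path.
import hashlib

def add_to_ipfs(chunk_bytes):
    # Stand-in: real code would run `ipfs add` and return the CID.
    return "Qm" + hashlib.sha256(chunk_bytes).hexdigest()[:16]

def rewrite_playlist(playlist_text, hash_log):
    """Point every segment line at its IPFS hash instead of a filename."""
    out = []
    for line in playlist_text.splitlines():
        if line and not line.startswith("#"):
            out.append("/ipfs/" + hash_log[line])   # segment line
        else:
            out.append(line)                        # tag line, untouched
    return "\n".join(out) + "\n"

hash_log = {}  # the "log file": segment name -> IPFS hash
for name, data in [("chunk0.ts", b"\x00fake-mpegts-0"),
                   ("chunk1.ts", b"\x00fake-mpegts-1")]:
    hash_log[name] = add_to_ipfs(data)  # once the chunk is created, add it

playlist = "#EXTM3U\n#EXTINF:10.0,\nchunk0.ts\n#EXTINF:10.0,\nchunk1.ts\n"
rewritten = rewrite_playlist(playlist, hash_log)
print(rewritten)  # every segment line now begins with /ipfs/Qm...
```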
C
Well, they say there are only two hard things in computer science: cache invalidation, naming things, asynchronous callbacks, and off-by-one errors. We hit them all. Cache invalidation: trying to figure out why the stream was stalling. Well, IPNS takes two minutes to publish the stream. Assuming you have a 30-second chunk, that's four times slower; that's not going to work.
C
IPNS then takes about two minutes to resolve, which again is not going to work. So IPNS with the DHT takes way too long because, from what we were told, there are way too many people behind NATs, and it's basically the luck of the draw whether you're going to get a fast resolve or not. So we looked at IPNS pubsub, which publishes things very, very quickly, and it works great most of the time.
C
So what was our solution? Well, because we wanted this to work, because we wanted this to be a proof of concept, we scrapped IPNS for the conference and just hosted the small little six- or seven-hundred-byte file over HTTP. It wasn't going to break things. Now, naming things: if FFmpeg was restarted, it starts numbering from the beginning, duplicating names in the log file. So when you try to replace them, the hashes don't match up and everything just breaks.
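A toy illustration of why those duplicate names broke the rewrite step: the log is keyed by segment name, so a restart silently clobbers earlier entries. The session-prefix mitigation shown at the end is hypothetical, not what was used at the conference:

```python
# The hash log maps segment names to IPFS hashes. When ffmpeg restarts
# and numbers segments from zero again, the second "chunk0.ts" entry
# overwrites the first, so any playlist still referring to the original
# chunk0.ts now resolves to the wrong hash and playback breaks.
hash_log = {}
hash_log["chunk0.ts"] = "QmOriginalRun"   # first ffmpeg run
hash_log["chunk0.ts"] = "QmSecondRun"     # restart: same name, new data

clobbered = hash_log["chunk0.ts"] == "QmSecondRun"  # original mapping gone

# Hypothetical mitigation: key segments by (session, name) so a
# restarted encoder can never collide with an earlier run's names.
safe_log = {}
safe_log[("run1", "chunk0.ts")] = "QmOriginalRun"
safe_log[("run2", "chunk0.ts")] = "QmSecondRun"
print(clobbered, len(safe_log))  # True 2
```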
C
So our solution was creating self-recovery measures that recognize when something is not working properly; every time the loop runs, it just gets things back up and running. As for off-by-one errors: HLS sequences have a timecode in them, and if they're incorrectly ordered, the client stalls. Rewriting them with a timecode led to very limited success, but when we looked at this problem, we found that HLS had a better solution. There is actually a tag in HLS that indicates the beginning of a new sequence; we applied that, and things started working.
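That tag is #EXT-X-DISCONTINUITY, which tells the client that timestamps and numbering reset at that point. A sketch of splicing two encoder runs with it follows; the splice function is an illustrative reconstruction, not the conference's actual script:

```python
# HLS defines an #EXT-X-DISCONTINUITY tag meaning "timecodes and
# numbering reset here", so a restarted encoder doesn't stall playback.
def splice_with_discontinuity(old_segments, new_segments):
    """Join two segment runs, marking the encoder restart for the client."""
    lines = ["#EXTM3U", "#EXT-X-TARGETDURATION:10"]
    for seg in old_segments:
        lines += ["#EXTINF:10.0,", seg]
    lines.append("#EXT-X-DISCONTINUITY")   # new timecode sequence begins
    for seg in new_segments:
        lines += ["#EXTINF:10.0,", seg]
    return "\n".join(lines) + "\n"

playlist = splice_with_discontinuity(["/ipfs/QmAaa"], ["/ipfs/QmBbb"])
print(playlist)
```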
C
You get config files that you put on your computer, you connect to the VPN, and we used that as the authentication mechanism. The nice thing about OpenVPN, also, is that it wraps everything in a TCP link, so the UDP packets don't drop. Live streams usually use UDP packets, because if you lose some packets, who cares, you want to continue; but because there's a bit of a delay anyway, wrapping it in TCP actually creates a better stream without much added delay. So I'm going to stop sharing now. Ilan, if you want to share your screen, let's stream something.
C
So there is a little bit of a delay in this model because, as I said, you can't hash live events. Between waiting for the chunk to get created, it getting published on IPFS, and then the HLS client actually downloading the proper chunks and so forth, there is a little bit of a delay. But when we streamed it at the conference, we had about 16 people viewing it, and the stream worked great; it went down once, I think, for about half a second, and we had some really good...
C
I'll show you this really quickly. This is what we did: we took the stack and compressed it onto a Raspberry Pi, using the Raspberry Pi camera. So I'm going to share my screen here for a second and grab some links. Where did my links go? I'm going to go directly to local IP addresses, just because it works better.
C
So this is a Raspberry Pi camera pointed out a window, and as you can see, it streams pretty well on local IP addresses. And if we wanted to, there are no port forwards, nothing like that: this is playing off of a third-party VM with the same player, and as you can see, it plays. And then there's another little thing we did. Because this is the Raspberry Pi camera hooked up to the Raspberry Pi, we were talking about what else we could do with this.
C
What could be some cool things? So what we did, let's see if this thing loads: we took a Raspberry Pi, connected it to a software-defined radio, tuned it to an FM frequency, and now we have internet radio over IPFS. So I will throw these links into the chat, if I can figure out where my chat is, for you to take a look at. This is just running; you'll see that the streams will stall and things of that nature.
C
It worked really great. All the problems we actually had were physical issues at the location we were at, because we were at the Mozilla Hive and we had to use a bit of hackery to actually get the feeds into our laptops. So we had some audio sync issues, stuff like that, but none of it was because of this stack; it was just because of the very short amount of time we had to prep. But yeah.
E
I suppose, is there a threshold as far as the number of people who can hit it? I guess since you're using the gateway, that shouldn't be a problem. So, like, did you do any load testing?
C
We've done some playing around, though, especially with IPFS using the companion add-on for Chrome and Firefox, and in theory what we should be able to do is actually spread that content around a lot faster if everybody was using the add-on. So, I mean, in theory, if everything works as planned, scalability shouldn't be an issue at all, if things work.
C
So in the internal repository for the Terraform scripts, there's a variable that controls how many mirrors you can spin up. So if we're expecting, say, a thousand people to watch it, we can, from the beginning, spin up four droplets as mirrors, and each mirror also holds its own gateway. But then the more mirrors we have, the more nodes IPFS knows there are, and then it does the distribution of the chunks through IPFS.
C
So that's one of the powers of HLS: you can do multiple resolutions, and then the client decides, you know what, I'm lagging behind, let me drop to a lower resolution, because it assumes network congestion. So yes, you can do that; it's just part of HLS. But one of the issues that you would probably run into (now, we didn't do this, but just looking at it) is, first of all, that you would be distributing multiple different streams.
C
So
if
you
are
looking
at
pushing
out
the
stream
to
multiple
to
a
large
audience,
you
would
have
to
push
out
actually
multiple
different
streams.
They'll
be
if
you
would
actually
be
splitting
off
how
many
nodes
have
the
stream
that
you're
actually
watching.
But
the
other
issue
that
you
would
also
run
into
is
that
the
lag
in
HLS
would
not
necessarily
be
a
a
bandwidth
issue
anymore,
because
there's
so
many
other
factors
there's
has
IP
NS
updated
properly,
has
does
the
ipfs
have
the
hash
that
you're
looking
for,
etc,
etc.
F
Yeah, absolutely. Perhaps one potential use case would be that even the receivers are the ones doing the transcoding. So if I have a laptop in my home and I'm watching some stream, and I have other devices in my home that are also trying to watch the same stream, my laptop can be the machine that is responsible for doing the re-encoding, and then the other devices can just fetch it.
C
That's an idea; we tried a lot of them, yeah. There was an issue with the video feed at DWeb, and we tried to pull the stream from YouTube and push it into IPFS. Unfortunately, the issue they had was actually before it hit YouTube, so although the theory worked and it was working, it didn't really solve any problems. But there are definitely ways of, you know, pulling the stream, re-encoding it, and things of that nature.
B
We are going to end here. Thank you so much, Ben and Ilan, for presenting and sharing with us. And hey, before you go, I'll leave a link to our weekly newsletter in the comments. I would also like to thank Leto for taking notes. Have a great day, and I will see you next week. Take care. Bye-bye. Thanks.