From YouTube: Scalable Decentralized Video Transcoding
Description
In this presentation from Day 2 of the #SwarmOrangeSummit, Livepeer's Eric Tang shows the principles behind Livepeer's decentralized and censorship-resistant video. The basic idea is that anyone who can offer computational resources for transcoding can join the network and provide value. The presentation follows up with an overview of the general technical traits behind Livepeer, which in essence works as a transcoding market, and highlights some of the issues that still need to be resolved.
Alright, let's get started. First of all, I want to thank all the organizers for putting up this event. It's really great to be here; it's my first time in Slovenia and it's great. My name is Eric Tang and I'm from the Livepeer team. Just a side note: I am live-streaming this talk right now at ericxtang on Livepeer TV, so if you go there you'll be able to see this stream live.
So what I'd like to talk about today is a specific component inside Livepeer, which is the scalable, decentralized video transcoding infrastructure that we're building. Can I have a show of hands and see who here knows what Livepeer is about? Alright, cool, roughly half the room. Livepeer is a decentralized video live-streaming platform, and the idea is that, instead of using a traditional, centralized service provider that runs on AWS, we allow anyone to bring their hardware, join the network, and do video live broadcasting on the internet in a decentralized way.

There are a few problems in this whole space, because an end-to-end video broadcasting system is quite a complex system. One problem that I'm going to talk about specifically today is transcoding. For people who don't really know what transcoding is and why it's important, think about what happens when we watch a video on the internet; say you go to YouTube and watch a video.
There are many different bitrates and many different formats of the same video available on YouTube, so that whether you're walking outside with a cell connection or sitting at home with a 10-gigabit fiber connection, you can always view the video in the best quality that's available to you at that moment. It also accounts for many different devices and many different types of players.
So when you upload that one video to YouTube, actually one of the first things it does is transcode that video into many different bitrates and many different codecs, so that anybody on the globe can watch it. That transcoding job is very expensive from a computation standpoint. In fact, YouTube is the number one largest CPU consumer in the entire Google infrastructure, because people are uploading thousands and thousands of hours of video every hour.
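To make the transcoding step concrete, here is a minimal sketch of producing a bitrate ladder by shelling out to ffmpeg. This is not Livepeer's actual pipeline; the input file name and the ladder values are placeholders chosen for illustration.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// A rung of a hypothetical bitrate ladder: output height and video bitrate.
type rendition struct {
	height  int
	bitrate string
}

func main() {
	ladder := []rendition{
		{1080, "4500k"},
		{720, "2500k"},
		{360, "800k"},
	}
	for _, r := range ladder {
		out := fmt.Sprintf("out_%dp.mp4", r.height)
		// -vf scale=-2:H resizes to the target height keeping aspect ratio;
		// libx264 re-encodes the video at the requested bitrate.
		cmd := exec.Command("ffmpeg", "-y", "-i", "input.mp4",
			"-vf", fmt.Sprintf("scale=-2:%d", r.height),
			"-c:v", "libx264", "-b:v", r.bitrate,
			"-c:a", "aac", out)
		if err := cmd.Run(); err != nil {
			log.Fatalf("transcode to %dp failed: %v", r.height, err)
		}
		fmt.Println("wrote", out)
	}
}
```

Each rung re-encodes the same source, which is why a player can switch renditions as the viewer's bandwidth changes.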
Today we do video transcoding in a centralized way: we host a service on Amazon or another cloud hosting service, we stream the video from a phone, a laptop, or any camera into the hosted service with some video protocol, and then it gets transcoded and sent somewhere else.
In a decentralized workflow, what we want to do instead is let anybody put their hardware into a network and do the video transcoding for anybody on the other side of the open market who wants their video transcoded. Why is this powerful? Number one, it gives open access to anybody who can provide hardware. That means anybody who has any kind of computation capacity sitting around can join this network, provide value, and be paid in exchange.
Number two, it's dynamically scalable based on supply and demand, and this is a very important concept, because video live-streaming is notorious for its peaks and valleys. When a popular event happens, many people want to watch the same stream; the rest of the time, the network is not being used that much. But as a provider of a streaming service, you end up having to always maintain headroom, because you have to account for the peaks, and that means you're always spending money on hardware that you're not using.
So if we look at Livepeer as a transcoding market, this is what it looks like. I showed a picture similar to this at Devcon and EDCON, but this version looks at just the transcoding piece. To walk through how the protocol works quickly: first, the transcoder advertises its capability and a price.
For example, it says: I'm able to transcode this one video from this bitrate into these many bitrates, and this is the price I charge for doing this work. Any transcoder can come in, do this, and advertise its capability with the smart contract. After that, a broadcaster can come in and say: I have a video and I want it transcoded.
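As a rough sketch of what such an advertisement might carry, here are some illustrative Go types. The field names and units are my assumptions, not Livepeer's actual contract schema.

```go
package main

import "fmt"

// Hypothetical advertisement a transcoder might publish via the smart
// contract: which input it accepts, which renditions it offers, and a price.
type TranscoderAd struct {
	Transcoder  string   // transcoder's address (placeholder value below)
	InputCodec  string   // e.g. "h264"
	OutBitrates []string // renditions offered, typically lower bitrates
	PricePerSeg uint64   // price charged per video segment, in wei
}

func main() {
	ad := TranscoderAd{
		Transcoder:  "0xabc...",
		InputCodec:  "h264",
		OutBitrates: []string{"2500k", "800k"},
		PricePerSeg: 1_000_000_000,
	}
	fmt.Printf("advertising: %+v\n", ad)
}
```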
The broadcaster uploads the video into a video storage, and at the same time the transcoders can start loading the video from the storage and start doing the work. After doing the work, a transcoder generates what we call claims of the work it has done. Basically, a claim includes the signature of the broadcaster plus the hash of the results that the transcoder has produced, and the transcoder gathers these claims for every single video segment.
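Here is a minimal sketch of what a per-segment claim could look like, following the description above. The exact hash function and signature encoding are assumptions on my part, chosen for illustration.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Claim records, for one video segment, the broadcaster's signature over the
// segment and the hash of the transcoded result, as described in the talk.
type Claim struct {
	SegmentSeq     uint64
	BroadcasterSig []byte   // broadcaster's signature authorizing the segment
	ResultHash     [32]byte // hash of the transcoded output
}

func makeClaim(seq uint64, sig, transcoded []byte) Claim {
	return Claim{
		SegmentSeq:     seq,
		BroadcasterSig: sig,
		ResultHash:     sha256.Sum256(transcoded), // assumption: SHA-256 for illustration
	}
}

func main() {
	c := makeClaim(42, []byte("sig-bytes"), []byte("transcoded-segment-bytes"))
	fmt.Printf("claim for segment %d: result hash %x...\n", c.SegmentSeq, c.ResultHash[:8])
}
```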
If we look at video transcoding, though, it's a form of parallelizable off-chain computation: something that cannot fit onto the Ethereum Virtual Machine. Any kind of computation like that, we have to do off chain and then, through some verification, check it on chain. So now we're going to talk about some of the interesting problems that we're working on.
Today the transcoder is doing the work based on the assignment from the smart contract, based on the job that's coming in. The problem here is that there's not necessarily a lot of scale in this sort of workflow: the transcoders are being assigned jobs, they have to do those jobs, and then they have to wait for the next job to come in.
So instead, what we're working on right now is scaling this architecture through what we call a transcoding race. Instead of having one transcoder do the job, we have a role called an orchestrator. The orchestrator acts as the transcoder does today and gets the job, and then it makes the work available so that multiple transcoders can do the same job, and they have to race to get the job done.
Whoever gets the job done quickly and correctly wins that race. Imagine this in the framework of mining: people are competing to get the job done, and the winner of that competition gets the reward. In fact, because transcoding happens so quickly, maybe the losers who still produce the right answer also get some sort of reward, similar to uncle blocks.
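Here is a toy sketch of the race in Go: an orchestrator fans one segment out to several transcoders and takes the first result that comes back. The names and latencies are invented; the real protocol also has to verify correctness before declaring a winner.

```go
package main

import (
	"fmt"
	"time"
)

type result struct {
	worker string
	output string
}

// transcode stands in for real work; latency varies per worker.
func transcode(worker, segment string, latency time.Duration, out chan<- result) {
	time.Sleep(latency)
	out <- result{worker: worker, output: "transcoded:" + segment}
}

func main() {
	out := make(chan result, 3) // buffered so the losers don't block
	for i, latency := range []time.Duration{30, 10, 20} {
		go transcode(fmt.Sprintf("transcoder-%d", i), "segment-42",
			latency*time.Millisecond, out)
	}
	winner := <-out // first result to arrive wins the race
	fmt.Printf("%s wins with %q\n", winner.worker, winner.output)
	// Losers that also produced a correct output could still receive a
	// smaller reward, analogous to uncle blocks.
}
```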
What you want to do is spread out the resources, so that you're not doing all the work for one particular job; you're doing small chunks of work for many jobs. If you spread out the risk, the entire system is much, much more scalable. It's similar to running something like NiceHash on your computer, where you're mining on many chains at the same time.
[Audience question, inaudible]

Yes, there are many different scenarios of verification that we can talk about, but what I want to touch on right here is that the scaling factor comes from a crypto-native approach, where people are competing to do the computation and being rewarded from that competition. That incentivizes computation efficiency and it incentivizes low latency, which is needed for video live streaming.
So if we look at this generalized computation market, the problem really lies in decentralized verification. The work has to be verified, and that's the tough part, because now we're trusting anonymous hardware on the internet to do the work for us. How can we trust that the result is correct? There are a few ways to do this.
One way is to simply redo the computation. If you have a computation that's hard to break down into small parts, then to verify it you would have to do the whole thing again. The bad part, of course, is that this is inefficient, and it will most likely never reach the efficiency of a centralized system, which only has to do the work once. Another way to do it would be to get the requester to verify.
That is to say, if I want some work done and I don't trust the workers, I can just do the homework again myself. But then that requires me to be able to do the homework: if I have some embedded device that needs the work done, maybe the device itself does not have the capability to do the work. Another problem here is that there are some economic attacks.
If the requesters are the ones verifying the work, you essentially open up a simple attack vector where the requester colludes with the workers. The requester can say: I'm just going to put in a bunch of work and always declare the work correct for a certain worker. That way I'm artificially inflating the amount of work that the worker has done, and therefore the worker will get a lot more reward than it has earned.
Does anyone here know what Truebit is? Alright, I'll explain. Truebit is this idea of remembering all the computation steps that have been done in a piece of computation. What they have done is build a custom interpreter that compiles a piece of computation down to its steps. Imagine I compile a program into assembly, and every assembly instruction is a step.
What you can do is, for every single step, calculate the hash of that step, then create a Merkle root at the end and commit that Merkle root on chain. Now imagine I was a solver and I did that: I would have effectively committed my result on chain, and also committed every single step of my computation on chain. And by the way, because it's on the blockchain, anyone can see this happening.
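A minimal sketch of that commitment idea: hash each computation step and fold the hashes into a Merkle root that could be committed on chain. SHA-256 and the handling of odd levels are my choices for illustration; Truebit's actual construction differs in detail.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// merkleRoot folds a list of leaf hashes pairwise until one root remains.
func merkleRoot(leaves [][32]byte) [32]byte {
	if len(leaves) == 1 {
		return leaves[0]
	}
	var next [][32]byte
	for i := 0; i < len(leaves); i += 2 {
		if i+1 == len(leaves) { // odd leaf carried up unchanged
			next = append(next, leaves[i])
			continue
		}
		joined := append(leaves[i][:], leaves[i+1][:]...)
		next = append(next, sha256.Sum256(joined))
	}
	return merkleRoot(next)
}

func main() {
	// Pretend each string is the machine state after one computation step.
	steps := []string{"step0", "step1", "step2", "step3", "step4"}
	leaves := make([][32]byte, len(steps))
	for i, s := range steps {
		leaves[i] = sha256.Sum256([]byte(s))
	}
	fmt.Printf("commit this root on chain: %x\n", merkleRoot(leaves))
}
```

With the root on chain, a challenger can dispute a single step and the solver only has to reveal the Merkle path for that step, rather than replaying the whole computation.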
[Audience questions, inaudible]

I'll get to that in a minute. There has been a proof of concept that's been built; if you want to check it out, it's at this URL. So now I'm going to get to non-deterministic computation verification. In video transcoding, sometimes the work is not deterministic, especially when you're running on a GPU, because of the floating-point calculations. So here we need to use other types of verification instead of bitwise verification.
These are things like video fingerprinting, which is basically calculating kernels on the images, so that we can compare two images at different resolutions to see whether they're the same image. We've been doing some research in this area. Another area here is the video quality score: how good is the video?
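To illustrate the fingerprinting idea (this is a generic average-hash, not the specific algorithm Livepeer researched), the sketch below samples each image down to an 8x8 grid, thresholds against the mean brightness, and compares the resulting 64-bit fingerprints by Hamming distance. Two renditions of the same frame at different resolutions should land close to each other.

```go
package main

import (
	"fmt"
	"image"
	"image/color"
	"math/bits"
)

// avgHash samples img down to 8x8 gray values and sets one bit per cell
// that is brighter than the mean, giving a 64-bit fingerprint.
func avgHash(img image.Image) uint64 {
	b := img.Bounds()
	var gray [64]uint32
	var sum uint64
	for y := 0; y < 8; y++ {
		for x := 0; x < 8; x++ {
			// Nearest-neighbor sample; real fingerprints average regions.
			px := img.At(b.Min.X+x*b.Dx()/8, b.Min.Y+y*b.Dy()/8)
			g := color.GrayModel.Convert(px).(color.Gray)
			gray[y*8+x] = uint32(g.Y)
			sum += uint64(g.Y)
		}
	}
	mean := uint32(sum / 64)
	var h uint64
	for i, v := range gray {
		if v > mean {
			h |= 1 << uint(i)
		}
	}
	return h
}

func main() {
	// Two synthetic "renditions" of the same gradient at different sizes.
	small := image.NewGray(image.Rect(0, 0, 64, 64))
	big := image.NewGray(image.Rect(0, 0, 256, 256))
	for _, img := range []*image.Gray{small, big} {
		w := img.Bounds().Dx()
		for y := 0; y < w; y++ {
			for x := 0; x < w; x++ {
				img.SetGray(x, y, color.Gray{Y: uint8(x * 255 / w)})
			}
		}
	}
	d := bits.OnesCount64(avgHash(small) ^ avgHash(big))
	fmt.Println("hamming distance between renditions:", d) // small for same content
}
```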
Right now, in the current version, the protocol picks the verification method for you, which is this bitwise, decentralized verification. But in the future, I should be able to write a verification method myself, and if people like my verification method, they should be able to pick it and use it instead. So that was the verification problem. Does anybody have any questions before I move on? Alright.
Another big issue here, which is an open problem, is the data availability problem. This is hard because, imagine I'm a transcoder and I'm competing with all the other transcoders on the network: I always have the incentive to make the data unavailable to them, so that I win all the time, through a variety of different attacks.
In fact, this is a problem that exists in Livepeer, in Truebit, even in Ethereum itself; this data availability problem is something that hasn't been solved yet. That's why I'm very excited about Swarm: maybe Swarm will solve that problem for us via the SWAP, SWEAR and SWINDLE scheme.