From YouTube: IPFS Camp - Edgar Lee of Netflix
Description
Learn how IPFS can be leveraged as a CDN for container image layers because the container runtime can be modified to retrieve layers identified by their CIDs.
Learn more about IPFS Camp: https://camp.ipfs.io/
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
Hi, my name's Edgar. I'm from Hong Kong and I currently work in California at Netflix. Previously I worked at Docker as well, so my focus is on containerized workloads, engineering tools, and infrastructure at Netflix. The project I'm working on is distributing container layers through IPFS. We run a lot of containers at Netflix, as many as three million containers per week, so innovating on container distribution is pretty important!
The reason I think IPFS has the potential to change how private infrastructure operates is that in traditional distribution architectures you have a server and a client. You scale the servers horizontally, put a load balancer on top, and maybe cache with in-memory databases. Then, once you need traffic to cross regions, you might mirror the content to different regions. All of this comes at increasing cost and complexity.
IPFS can operate as a CDN because it decouples the definition of what we want from how we get it. Using IPFS, instead of fetching the same data from across the world, you get it from your neighbor instead. Another useful aspect IPFS provides is the ability to chunk content into various blocks, and how you do that can really depend on the data. For example, if we split up a container layer on file boundaries, there's a greater chance of deduplication between the files.
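The decoupling described here can be illustrated with a toy content-addressing scheme. This is only a sketch: real IPFS CIDs wrap the digest in multihash/multibase encodings, but the core idea is the same.

```python
import hashlib

def content_address(data: bytes) -> str:
    # Toy content address: a bare SHA-256 digest. Real IPFS CIDs add
    # multihash/multibase framing, but the principle is identical: the
    # address is derived from the bytes themselves, not from a location.
    return hashlib.sha256(data).hexdigest()

# Any peer holding these bytes can serve them, and the requester can
# verify the response against the address. It no longer matters *where*
# the data comes from, only *what* it is.
layer = b"container layer contents"
addr = content_address(layer)
print(addr == content_address(layer))  # same bytes, same address, any peer
```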
If we have greater deduplication between files, not only do you have reduced storage requirements; assuming you already have the duplicated container layers, the amount of network traffic you need to get the remaining data will be much lower. So overall, your throughput for container distribution is going to be much higher.
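A minimal sketch of why chunking on file boundaries helps: if each file in a layer becomes its own content-addressed block, files shared between layers hash to the same block and only need to be stored and transferred once. The layers below are hypothetical, and SHA-256 stands in for IPFS's block hashing.

```python
import hashlib

def chunk_layer_by_files(layer: dict) -> dict:
    """Split a layer on file boundaries and content-address each chunk."""
    return {path: hashlib.sha256(data).hexdigest()
            for path, data in layer.items()}

# Two hypothetical image layers that share a base library.
layer_a = {"/lib/libc.so": b"libc bytes", "/app/a": b"app a"}
layer_b = {"/lib/libc.so": b"libc bytes", "/app/b": b"app b"}

blocks_a = set(chunk_layer_by_files(layer_a).values())
blocks_b = set(chunk_layer_by_files(layer_b).values())

# The shared file deduplicates: a node that already has layer_a only
# needs to fetch layer_b's unique blocks over the network.
to_fetch = blocks_b - blocks_a
print(len(to_fetch))  # 1 of 2 blocks; /lib/libc.so is already local
```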
Even beyond that, layers are downloaded on demand through the same peer-to-peer interface that IPFS provides. There's a paper called Slacker that surveyed about 57 different containerized applications, and they found that only about 6% of the data is actually read. That means for a 10-gigabyte image, for example, we'd only need about 600 megabytes to actually execute the container.
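The on-demand behavior can be sketched as a layer that fetches content-addressed blocks only when they are first read, so only the fraction actually touched ever crosses the network. `LazyLayer` and `fetch_block` are hypothetical names; the callable stands in for retrieving a block from the peer-to-peer network.

```python
import hashlib

class LazyLayer:
    """Toy model of on-demand layer loading: blocks are fetched from
    the network only on first access, then cached locally."""

    def __init__(self, block_ids, fetch_block):
        self.block_ids = block_ids      # addresses of all blocks in the layer
        self.fetch_block = fetch_block  # callable: address -> bytes
        self.cache = {}                 # blocks fetched so far

    def read(self, index: int) -> bytes:
        addr = self.block_ids[index]
        if addr not in self.cache:      # fetch on first access only
            self.cache[addr] = self.fetch_block(addr)
        return self.cache[addr]

# A fake remote store of 100 blocks; the container only ever reads 6.
store = {hashlib.sha256(bytes([i])).hexdigest(): bytes([i])
         for i in range(100)}
layer = LazyLayer(list(store), store.__getitem__)
for i in range(6):
    layer.read(i)
print(len(layer.cache))  # 6 of 100 blocks fetched, roughly the Slacker ratio
```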
I think one of the most interesting characteristics of IPFS is that it is able to operate over many different transports.
What that means for the everyday user is that you don't have to use the internet to access content from the IPFS network. You could be accessing content from your neighbor through Bluetooth, through infrared, or any other kind of transport. The Internet is one way to access the content you want, but it's not the only one. On top of that, the components IPFS is built out of make it incredibly easy to find others that have the content you are looking for, through all the means of transport available to you. So you could be searching over Bluetooth, Wi-Fi, and the greater Internet at the same time. This makes it much more efficient to distribute content, as well as much faster to download all your content.
I think for video transfer in particular, Netflix has its own hyper-optimized infrastructure, so the use cases I'm imagining for IPFS at Netflix are catered more towards distributing the software that powers Netflix than towards the video transfer itself.
We actually have these things called Open Connect, which is hardware we install where your internet providers are: they provide you Internet, and Netflix provides you the video content. So I think that particular niche is not going to be fulfilled by IPFS. But I can already envision a future where BitTorrent or other peer-to-peer protocols can share build caches.
If I start building my project in the cloud and I'm up to 50 percent, and a colleague wants to join in and compile the same project, they should jump straight to 50 percent, assuming they're building from the same git repository and on the same git commit. This problem seems like it should be theoretically possible to solve; it's just that four years ago, the protocols for peer-to-peer connectivity weren't as democratized.
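The shared-cache idea could be sketched like this: key each build step's output by a deterministic address derived from the repository, commit, and step, so a colleague on the same commit finds your partial results instead of recomputing them. All names here are hypothetical, and a plain dict stands in for the peer-to-peer cache; real remote build caches (e.g. Bazel's) use similar keys.

```python
import hashlib

def cache_key(repo: str, commit: str, step: str) -> str:
    # Deterministic key: same repo + commit + step gives the same key
    # on any machine, so results are shareable between builders.
    return hashlib.sha256(f"{repo}@{commit}:{step}".encode()).hexdigest()

shared_cache = {}  # stands in for a peer-to-peer build cache

def build_step(repo, commit, step, compile_fn):
    key = cache_key(repo, commit, step)
    if key not in shared_cache:        # only the first builder pays the cost
        shared_cache[key] = compile_fn()
    return shared_cache[key]

# I build the first half of the project...
build_step("example/proj", "abc123", "compile:core", lambda: b"core.o")
# ...and a colleague on the same commit resumes from my cached result
# instead of starting from zero.
hits = cache_key("example/proj", "abc123", "compile:core") in shared_cache
print(hits)  # True
```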