Description
Originally recorded during the Berlin Developers Meetings from July 9-13, 2018.
A: So, I wanted to explain to Adrian very quickly how I felt IPFS Cluster sharding worked. We spent lunch talking about it, then we did a deep-dive session and figured out two small details, which were not so small. Since I have some time, I will explain how it worked.
We basically want to allow you to add huge things and distribute them among a cluster of IPFS daemons.
Initially, we don't know how big the content is that we want to add. Normally, you would pick this up from a URL or something, so you would just start reading this archive, which sits somewhere, and start adding it, and as we add it, we should be distributing it to your cluster of IPFS daemons.
A
That
means
that
it
is
difficult
in
this
mode
that
we
want
to
support
to
make
assumptions
about
pretty
much
how
big
your
tag
is
and
what
is
the
best
way
to
split
it
in
the
future.
There
will
be
cases
where
we
can
make
such
assumptions
and
we
can
take
the
flavor
options,
but
in
this
journalistic
case,
we
we
start
with
those
I,
don't
know
how
much
content
and
I
just
need
to
distribute
it
all
around.
A
What
we
do
is
that,
in
the
same
way
that
ipfs
chunks
and
builds
attack
as
it
receives
the
chunks,
we're
gonna
be
doing
the
same,
except
that,
additionally,
to
the
regular
attack,
we're
gonna
be
building
a
cluster
dag
a
cluster
dag
is
an
alternative,
an
alternative
duck
which
is
just
going
to
divide
the
original
ipfs
tag
in
charts
and
each
chart
is
going
to
be
a
bucket.
It's
going
to
be
the
unit
which
is
going
to
be
allocated
to
an
ipfs
demon
and
then
replicated
according
to
your
replication
factor.
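A rough sketch of the shard/bucket idea just described, under stated assumptions: the structure and sizes here are illustrative, not the actual cluster-DAG format. Incoming chunks are accumulated into shards up to a maximum shard size, and each full shard becomes one unit for later allocation.

```python
def build_shards(chunks, max_shard_size):
    """Group incoming chunks into shards (buckets). Each shard is the
    unit later allocated to an IPFS daemon and replicated; the real
    cluster DAG links shards by CID, which this sketch omits."""
    shards, current, size = [], [], 0
    for chunk in chunks:
        # Start a new shard when the next chunk would overflow this one.
        if current and size + len(chunk) > max_shard_size:
            shards.append(current)
            current, size = [], 0
        current.append(chunk)
        size += len(chunk)
    if current:
        shards.append(current)
    return shards

chunks = [b"a" * 10, b"b" * 10, b"c" * 10, b"d" * 5]
shards = build_shards(chunks, max_shard_size=25)
```

With a 25-byte limit, the first two 10-byte chunks share a shard, and the third chunk starts a new one, so the four chunks end up in two shards of two chunks each.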
B: [inaudible question]

A: Yes, so the idea is that your computer is too small to actually process this at all. You are just picking it up from somewhere and adding it to cluster, which is splitting it, building the DAGs, and distributing the content around a cluster of peers which can actually take the full size and replicate it.
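The allocation step mentioned here can be sketched as follows. This is a hypothetical, simplified allocator, not IPFS Cluster's actual strategy: it assigns each shard to `replication_factor` peers, preferring the ones reporting the most free space, and decrements their free-space metric as it goes (the peer names and the metric are made up for illustration).

```python
def allocate(shards, peers, replication_factor):
    """Assign each shard to `replication_factor` distinct peers,
    preferring peers with the most free space. `peers` maps peer
    name to free bytes (an illustrative metric)."""
    allocations = {}
    free = dict(peers)  # work on a copy of the reported metrics
    for i, shard in enumerate(shards):
        shard_size = sum(len(c) for c in shard)
        # Pick the replication_factor peers with the most free space.
        targets = sorted(free, key=free.get, reverse=True)[:replication_factor]
        for p in targets:
            free[p] -= shard_size
        allocations[i] = targets
    return allocations

peers = {"peerA": 100, "peerB": 80, "peerC": 60}
allocs = allocate([[b"x" * 30], [b"y" * 30]], peers, replication_factor=2)
```

Note how the second shard lands on peerA and peerC rather than reusing peerB: updating the free-space metric after each allocation is what spreads shards out, which foreshadows the open question below about keeping storage metrics up to date.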
C: So, in the current state of how Bitswap sessions work, they basically always serve stuff in order, right? So in this graph, if I start from the root CID as a client, the cluster in the middle can only satisfy the first two nodes, and then it has to go and ask somewhere else. And how does this mesh with what was mentioned earlier, that not all nodes will be in the DHT?
A: Okay, since that is mostly clear, the open questions are: while we are sharding, how do we ensure that we are filling the storage efficiently? How do we ensure that our storage metrics are up to date and that we are sending the shards to the right place, in a way that works and that the shards don't all end up in the same peer, etcetera? And what are the new data structures and the things we have to carry around with them? How do we fill them in during the process? Are there any other questions?

Yeah.