A
So, coming back to measuring IPFS: another thing that we're working on in the Protocol Labs teams is trying to see how good the provider record liveness is in the actual implementation of IPFS. That's going to be the topic of this talk.

A
I'm going to go through the content of the presentation. Basically, I want to introduce what it actually means to publish content in IPFS and which parameters we take into account to publish content in the network. Then I'm going to introduce the methodology I was following to try to measure how good the liveness of the provider records is, and then we will go through the results that I prepared.

A
Those results include, on one side, the current parameters that IPFS is using, and I'm also going to introduce different variants that could help improve it. Let's see. Over these few months that I've been working on this topic, I realized that people out there don't really know how IPFS works. They think that when they provide content to the network, the content gets replicated across the network, which is wrong.

A
It might sound weird, but people think it works like that, that other people are storing your files for free. It obviously doesn't work like that. So I just want to remark that when you want to publish content, what you do is divide the content into chunks and start building the Merkle DAG, so that each of the blocks is identified with a content ID (CID). Once you have that CID, what you really push to the network is not the content itself.

A
It's the link between that content ID, which is just the hash, and your peer ID; those are what we call provider records. So basically you are not publishing the content, you are just publishing the link between the content and you as a provider. That's more or less what you do. The way it's described in Kademlia, they use this K value, which is a bit ambiguous because there are several different K values.
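
As a minimal illustration of what gets published (a sketch only, assuming the go-cid and go-multihash packages; the struct and names here are not the actual kubo code path): a block is hashed into a CID, and the record announced to the DHT just maps that CID to the provider's peer ID.

```go
package main

import (
	"fmt"

	"github.com/ipfs/go-cid"
	"github.com/multiformats/go-multihash"
)

// ProviderRecord is the only thing published to the DHT: a link between a
// block's CID and the peer ID of whoever can serve that block.
// (Illustrative sketch, not the real record format.)
type ProviderRecord struct {
	CID    cid.Cid
	PeerID string
}

func main() {
	block := []byte("one chunk of the original file")

	// Hash the chunk and wrap the digest into a CIDv1 with the raw codec.
	mh, err := multihash.Sum(block, multihash.SHA2_256, -1)
	if err != nil {
		panic(err)
	}
	c := cid.NewCidV1(cid.Raw, mh)

	// "Publishing" announces this mapping, not the block data itself.
	rec := ProviderRecord{CID: c, PeerID: "12D3KooW...myPeerID"}
	fmt.Printf("announce %s -> provided by %s\n", rec.CID, rec.PeerID)
}
```
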
A
In this case, this K value is just how many peers I share these provider records with, so that if someone wants to retrieve some content, they have a higher chance of hitting me as a provider. In the current implementation of IPFS this value is set to K = 20, so whenever I want to publish content that I'm going to host, I'm going to replicate the provider records to 20 peers.

A
The way you choose the K closest peers is by using the XOR distance, so that everybody can independently end up knowing which peers have the provider records. And when you publish those records to the network, by specification in IPFS they are only going to be retrievable for 24 hours. This way the network is not overloaded with records for content from the past. So to keep them alive you have to do what they call a republish.
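
A small sketch of that selection rule (illustrative only; the real routing-table logic lives in go-libp2p-kad-dht): the distance between the CID and a peer is the XOR of their SHA-256 keys, compared as a big-endian integer, and the records go to the K peers with the smallest distance.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"sort"
)

// xorDistance returns the Kademlia distance between two keys:
// the byte-wise XOR, compared as a big-endian integer.
func xorDistance(a, b [32]byte) []byte {
	d := make([]byte, 32)
	for i := range a {
		d[i] = a[i] ^ b[i]
	}
	return d
}

// kClosest picks the k peers whose hashed IDs are closest to the target CID hash.
// (Sketch: peer "IDs" are plain strings hashed with SHA-256.)
func kClosest(target [32]byte, peers []string, k int) []string {
	sort.Slice(peers, func(i, j int) bool {
		di := xorDistance(target, sha256.Sum256([]byte(peers[i])))
		dj := xorDistance(target, sha256.Sum256([]byte(peers[j])))
		return bytes.Compare(di, dj) < 0
	})
	if k > len(peers) {
		k = len(peers)
	}
	return peers[:k]
}

func main() {
	target := sha256.Sum256([]byte("bafy...someCID"))
	peers := []string{"peerA", "peerB", "peerC", "peerD"}
	fmt.Println(kClosest(target, peers, 2)) // the 2 closest candidate holders
}
```
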
A
So, if I want to keep providing the content for a longer period, I have to keep republishing the provider records to the K closest peers; right now that is done every 12 hours, and whether that's a proper value or not is something we will discuss later on. The way it's done is that you keep extending the availability of those records in windows of 12 hours. This serves several purposes.
A
The first one is that, by recalculating the closest peers, you check again whether there are new peers that are now closer to the content; and since you are contacting the holders again to put the records, you also refresh them.
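
A rough sketch of that reprovide cycle (the intervals are the ones mentioned in the talk; the publish callback is a placeholder, not kubo's actual API):

```go
package main

import (
	"fmt"
	"time"
)

const (
	recordTTL         = 24 * time.Hour // provider records expire after this
	reprovideInterval = 12 * time.Hour // republish well before expiry
)

// reprovideLoop republishes every CID we still want to provide, so the
// records land on whoever is currently closest and their TTL is refreshed.
func reprovideLoop(cids []string, publish func(cid string)) {
	ticker := time.NewTicker(reprovideInterval)
	defer ticker.Stop()
	for range ticker.C {
		for _, c := range cids {
			publish(c) // re-selects the K closest peers and puts the record again
		}
	}
}

func main() {
	// Toy publish callback; in the real client this would re-run the DHT provide logic.
	publish := func(c string) {
		fmt.Println("republished", c, "at", time.Now().Format(time.RFC3339))
	}
	reprovideLoop([]string{"bafy...1", "bafy...2"}, publish)
}
```
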
A
Given that, there are a few questions that arise about the current state of the provider record liveness. Are those provider records actually retrievable during those 24 hours, which is the theoretical time that they should be available?

A
How many of those initial provider record holders, the ones we contacted originally, are actually keeping those records over these 24 hours? Because, as Janice was presenting before, the network has a certain degree of node churn, which is expectable in any P2P network. But what if the node churn is too high and after, I don't know, four hours there are no peers with the provider records left in the network? That would mean that there is no link between me as a content provider and the people that want to retrieve the content.

A
Coming back to the questions that are out there, like the concerns about hydras: of course, hydras take a huge part in the current IPFS network, so are hydras actually boosting the overall provider record liveness? Is the entire network relying on hydras? Spoiler: not that much. And then there is what I call the in-degree here.

A
From those original provider record holders, the K peers that we select at the beginning, how many of those peers are actually still among the closest ones to that CID over those 24 hours, or over the 12 hours after which I would republish? These are the things that I would like to cover with the presentation.
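
A sketch of how that in-degree can be counted (my own formulation of the metric just described, not the exact study tooling): take the original K holders and count how many of them still appear among the K peers currently closest to the CID.

```go
package main

import "fmt"

// inDegree counts how many of the original provider-record holders are
// still among the K peers currently closest to the CID.
func inDegree(originalHolders, currentClosest []string) int {
	closest := make(map[string]bool, len(currentClosest))
	for _, p := range currentClosest {
		closest[p] = true
	}
	n := 0
	for _, p := range originalHolders {
		if closest[p] {
			n++
		}
	}
	return n
}

func main() {
	original := []string{"peerA", "peerB", "peerC"}
	nowClosest := []string{"peerB", "peerC", "peerX"}
	fmt.Println(inDegree(original, nowClosest)) // 2 of the 3 originals are still closest
}
```
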
A
I'm going to go now through the data, but I would like it to be open for everybody. I just want to show the data and see how you think or feel about it, so feel free to stop me and ask whatever you want at any point. So yeah, going through the methodology: I created a tiny CID hoarder, which basically generates a random set of CIDs so that we can cover the whole hash space, and publishes them.

A
Whenever we have those CIDs, I try to see whether the holders keep their records and, in parallel, whether the content is still retrievable by making DHT lookups, and I also calculate which are the closest K peers at that specific time, every 30 minutes. So yeah, let's go for the measurements.
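
A rough sketch of that sampling loop (the names, the stubs and the example interval are just illustrations of the methodology described, not the actual study code): every tick, ping each original holder, ask it for the provider record, and independently do a DHT lookup for the CID.

```go
package main

import (
	"fmt"
	"time"
)

// Sample is one observation for a single CID at a point in time.
type Sample struct {
	When           time.Time
	ActiveHolders  int  // original holders that answered a ping
	HoldersWithRec int  // holders that still returned the provider record
	Retrievable    bool // whether an independent DHT lookup found any provider
}

// monitorCID pings every original provider-record holder on each tick and,
// in parallel to that check, does a DHT lookup to see whether the CID is
// still retrievable at all.
func monitorCID(cid string, holders []string,
	ping func(peer string) bool,
	hasRecord func(peer, cid string) bool,
	lookup func(cid string) bool,
	every time.Duration, out chan<- Sample) {

	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for range ticker.C {
		s := Sample{When: time.Now(), Retrievable: lookup(cid)}
		for _, h := range holders {
			if ping(h) {
				s.ActiveHolders++
				if hasRecord(h, cid) {
					s.HoldersWithRec++
				}
			}
		}
		out <- s
	}
}

func main() {
	out := make(chan Sample)
	// Stubs stand in for the real libp2p ping / DHT calls.
	go monitorCID("bafy...someCID", []string{"peerA", "peerB"},
		func(string) bool { return true },
		func(string, string) bool { return true },
		func(string) bool { return true },
		time.Second, // 30 minutes in the real measurements
		out)
	fmt.Println(<-out)
}
```
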
A
So, the publish method of the Kademlia implementation, the logic behind it is: it calculates which are the closest peers, and then it tries to connect to them straight away to put the records on them.

A
It's not such a straightforward operation and it might take some hops. But in the end, these are the results that I get. This graph shows that, for those 20 peers that are supposed to be contacted, fifty percent of the time we only get around 18 successful connections. This means that half of the time there are always two peers that are not reachable when we actually try to put the records on them. Of course, this is not something extremely bad; that's why K was oversized in the first place.

A
We can still see that we always have more or less 16 peers which were active and on which we could actually put the records. Following up with the time of this whole process: Janice was already introducing the fact that it takes too much time to put the records out there. If IPFS really wants to be faster, we need to focus on improving this whole time it takes to put the records, because one CID is really tiny in size, just the representation of a block.

A
So a CID is really tiny, and when you are chunking a whole file you are not generating just a few CIDs; if you have something like a gigabyte, there can be thousands of CIDs. And if for each CID you have to wait a median of around 12 seconds, that's not something you can scale that easily if you want to adopt this at a larger scale. So this is the distribution that I got.
A
I started with K = 20, and most of the time the range is between six seconds and 22 seconds, but the 90th percentile is actually around 44 seconds. This means that the whole process of publishing can block my machine for 40 seconds just to put this one CID, which is something that I think we should take into account if we really want to scale it up.

A
Okay, so for those 20 closest peers that I'm contacting, coming back to the distribution of clients and the impact that hydras have: hydras were set up in the network to try to evenly increase the performance across the whole hash space. We can see that the distribution is what I would call healthy; we still have the two peers, on average more or less, that are offline.

A
We do have a huge share of go implementations (go-ipfs, now Kubo), but we do see that hydras make up something like 15 to 20 percent of the calculated closest peers. I don't think that's bad; I will come back to this a few slides later when we see the whole distribution, but I think it's good to point out that hydras are actually not overtaking the entire set of peers that got the records.

A
So, as I said, I tried to ping each of the provider record holders every 30 minutes, and this is the whole distribution of whether they were active. I want to make a clear distinction between whether they are active and whether they have or keep their records.
A
So this is just for whether they are active, and we can see that node churn is not really affecting the original set that much. Of course there is a lot of variance, but that's also something that we saw at the beginning, and we can see that it's quite stable over time. Over these 38 hours that the whole process was running we more or less have the same distribution, which might raise questions like:

A
Okay, is it actually node churn, and is the way we are measuring it the proper way of measuring it? I would say that the provider record liveness is quite healthy anyway. But of course, coming back again to the hydras, I wanted to make a clear distinction and analyze whether these peers that are actually providing stability in the network are hydras or not, so I tried to separate them into different graphs.

A
So here we have the whole set of non-hydra peers, which are mostly go-ipfs peers, and we can see a wide variety in the distribution. But once again it's quite stable; I think there isn't much to fear in IPFS.

A
I would say it's quite stable overall. It's not a side effect, but the hydras are extremely stable; then again, that's also their purpose. So if we actually look not at the activeness, but at whether I could retrieve the provider records of those CIDs that I was publishing:

A
We get the same stability and, as I said before, at hour 24 there is a sudden drop, which is what you would expect; that's the way the network filters out content that is no longer meant to be shared, let's say. So yeah, once again it's quite stable and there are no outliers. Yeah, sorry.
B
Quick clarification about the dots, maybe...
A
B
Yeah, the outliers are usually... but do we know, for example, when we see one dot, how many peers are behind it? Not how many peers, but how many times it was measured to be, like, five or six or seven?
A
We can still see that there are circles that are way wider than the other ones. I don't have the data right now, but I could take a look at it. But the interesting thing here is that there were no outliers at zero, so it means that there is always someone keeping the record. And something that I don't know if you already noticed: I was saying that the provider records shouldn't be shared after 24 hours, but this graph shows otherwise.

A
There are still peers, sorry, sharing them beyond 24 hours, which, coming back to the previous division between non-hydra nodes and hydra nodes (spoiler): guess which ones were still sharing the content.

A
No, I mean, once again, the same stability. I think it's a healthy way of doing it: almost no non-hydra peers drop to zero, apart from a couple of outliers that weren't sharing for one CID, and we can clearly see the abrupt drop of peers sharing the records at hour 24. For the hydras, however, it takes a longer time to actually stop sharing them, and you can see that there are still outliers serving them.

A
So if you look at the overall retrievability, which in theory should drop from one to zero for each CID, the content is actually still retrievable, because as soon as there is one peer out there giving you the record, you can get it. Just to point that out. Something else that I was also measuring is:

A
How close are those initial peers, over that time, to that specific CID? This is the in-degree that I was talking about. At the beginning we have the 20 closest peers; of course they were the closest ones, otherwise we wouldn't have chosen them. But we can see that over time the in-degree ratio doesn't go too low.

A
So this is something that Janice and I were actually discussing: is 12 hours maybe too short for reproviding the records? Because we see that peers are active in the network and they are still among the closest ones to the CID. Once again, this is something that continues even after 24 hours, so I think it's worth opening the discussion on whether we should increase the republish interval.
A
Maybe it's just a safer choice to keep it as it is and make sure that you are combating the node churn, and also the outliers, so that you are covered when some peers are not that close at that moment. And the second part of the results that I wanted to present was actually comparing different K values. Everybody agrees that IPFS works, but there are some parameters that were just put there and no one actually measured whether they are the best option or not.

A
So we wanted to check, for this case, whether the K replication value is actually the best one for IPFS at this moment. These are more or less the same graphs that I was presenting just now for K = 20, but with K = 15, 20 and 25 put together. We can see that the distribution is more or less the same; you are just shifting it over more peers. But I have to mention here that there are two K = 15 runs, yes.

A
I ran the same K value again to see whether the first result was just a random coincidence and my measurement was specific to that point in time; it's more or less to see which range we are talking about for the measurements. Okay. Reducing K to 15 has a few advantages, I would say: since you have to contact or connect to fewer peers, it's probably going to be faster, and at the same time you are going to put less overhead on the network.

A
Fewer peers are keeping the records, so you have to contact fewer peers, so there is less traffic on the network. But what I can see here is that the distribution of the time it takes to publish those records to the closest peers doesn't get that huge a benefit. I mean, with K = 20 we saw that there were almost no outliers failing to give you the records.

A
So if we decrease K to 15, we are actually shifting that graph down, and we see here that the time you are saving for providing the records is not that much; it's still above 10 seconds. Something quick would, for me, be the nice goal. So, just to point out that there is not much performance improvement in terms of the time it takes for you to publish the records.

A
Okay, for the active peers that are in the network: this is the median and this is the average, but overall it's the same stability, the one that I was mentioning before, and that's what I meant by saying it's just shifted. I think there is between a 15 and 25 percent drop, which is what I would consider the effective node churn for the provider record liveness. So yeah, ignore these drops.

A
It's just that, I think, when I was plotting there were no data points at some specific times, and the plot just drops them to zero. But yeah, I mean, I would like to know more about...
B
I think... so one question I have for this, and later we can discuss it: what are good metrics to monitor, or to say, when K should be set to 15?
A
B
A
So, I think that reducing K to 15 would have that sort of effect of relieving the network, in the sense that if you want to connect to a peer that is overloaded, it's going to be less overloaded, so you will probably be able to connect to that peer with a higher chance. Something that I realized doing the study is the following.

A
Whenever there is a sudden drop-off or an increase in timeouts when I'm just opening connections to peers, at the same time there is an equal peak of connection refusals, which makes sense to me: if the whole traffic of the network was spread between 10,000 nodes and now I just have 8,000, those 8,000 are taking the traffic of the 2,000 that just left. So I think that, in that sense, 15 would make sense.

A
But at the same time, we can see a clear difference in the stability that hydras give, which is mostly: if you connect to a hydra and you put that record on the hydra, it's just going to stay there, because they barely go offline. With regular nodes, however, you still rely on people not turning them off when they go to sleep.

A
So if you reduce the number of peers that are keeping the records, you are somehow relying even more on hydras; hydras serve as this bottom mattress that ensures the records almost never go away.

A
So I don't think it's bad; it's just that you are incentivizing a certain degree of centralization in IPFS. On the other hand, you have K = 25, which of course is going to add more overhead, and I don't think it makes sense at this point, with this network size, to do it. But yeah, okay.

A
It is a trade-off: you could have just one peer keeping the provider records for you, but if you have 10, you are sort of reducing the number of hops that you will need, because you do not have to find the single closest one that has it; you still have 10 different options. So I think that, in that case, we should probably try to speak with the implementation folks and see where they see IPFS going in the future.
C
A
Yeah, I mean, of course, it's an experiment that takes 30 to 40 hours, right? So you are still affected by the night shift. It might happen that if most of the network is based in China and China goes to sleep, you will see yourself affected by that. So I think that's also why it fluctuates a bit.
C
A
No, no, no. Okay, I think I forgot to mention that I was not republishing. I mean, I was thinking: it makes more sense to me to actually publish more CIDs than to publish fewer and republish them, because when you republish you are probably going to end up back at the beginning of the distribution.

A
And it's more or less the same. I'm just going to go straight to the conclusions: my measurements of the network show that the provider record liveness is actually healthy.

A
I think that I was generating a lot of traffic in IPFS, which I feel bad about, but at the same time I had to do it. My perception from this study is that I didn't find any point where I couldn't find my records, so just for that I would say it was completely successful. And at the same time, we still have a certain margin of peers that will drop off.

A
You have a higher variety in terms of peers that go down or come back up, but it doesn't really affect things, because if some peer goes down but another one comes up, on average there is still a peer serving the record. Regarding the in-degree ratio: instead of considering reducing K to 15, I think it would make more sense to see whether we can increase the republish interval.
A
I don't know if it was just that they were scared that maybe a lot of peers would go offline, and you just want to make sure that you periodically choose new closest peers and tell them to keep the records again. But if the stability of the network and the in-degree ratio stay the same over 40 hours, maybe 12 hours is too short. Of course, this is just an open discussion; I don't want to impose anything.

A
It should be taken into account where they see IPFS, as I said before, in two or three years. What do they want to do? Do they want to increase performance, or do they want to prioritize decentralization? You can't have everything, at least at this point. So yeah, once again, I'm also going to try to have the report done in one or two weeks, so please keep up with the measurements.
A
D
Have you thought of having the replication handled by the node that is pinning the content? Which means that if you know that you're going to publish a popular file, then the load is going to need some more balancing, so you say: okay, I want to replicate it 40 times. Or, if you think that it's not important, you just keep it online yourself.
A
Yeah, I mean, I do think that it would be really nice to have this K replication value change dynamically, because the network is growing and shrinking over time. So what might be a good value now, 20, maybe doesn't make that much sense when we have four times more nodes. I think that my ideal solution would be that each node, from its own view of the network, has its own K value, and that K value gets adjusted. Okay:

A
If, for example, 12 hours ago I was trying to reach 20 peers and I could only reach 10, maybe I have to increase my K value so that I can ensure that at least 15 peers have the record instead of 10. And it would also be affected by the fact that, if I'm hosting really popular content, I might consider increasing the K value as well, which means that if a lot of people are hitting this YouTube video and I replicate it to more peers, more peers have the provider records.

A
It means that they have to do fewer hops to actually find the content, which also increases the performance of the network, but without adding the overhead of setting the K value of all the nodes to 40. So I think we should try to see whether it's worth having a locality-aware or dynamic K value for each of the nodes, based on its vision of the network and its experience with the CIDs it is providing. So yeah.
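
A toy sketch of that idea (purely hypothetical, nothing like this exists in the current implementation): each node keeps its own replication factor and nudges it up when it could not reach enough of the chosen peers at the last provide, or when the content it serves is popular, and lets it drift back toward the default otherwise.

```go
package main

import "fmt"

// adaptiveK nudges a per-node replication factor (hypothetical policy):
// if fewer than ~3/4 of the chosen peers accepted the record last time,
// grow K; if the content is popular, grow it too; otherwise relax back
// toward the default.
func adaptiveK(currentK, reached, defaultK int, popular bool) int {
	k := currentK
	switch {
	case reached < currentK*3/4: // too many holders were unreachable
		k += 5
	case popular: // popular content: spread records wider, fewer lookup hops
		k += 5
	case k > defaultK: // things look fine again, drift back down
		k -= 5
	}
	if k < defaultK {
		k = defaultK
	}
	return k
}

func main() {
	k := 20
	k = adaptiveK(k, 12, 20, false) // only 12 of 20 were reachable, so raise K
	fmt.Println(k)                  // 25
}
```
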
A
You're going to publish it as well, but we are going to split the requests between both of us. So that means that both of us are going to see that fewer people are pinging the content, so I can actually decrease my K value, but you will also do it. So I think that it's a nice way to dynamically adjust the K value.
C
A
That would mean that people who want access to the content will actually have several options. And coming back to the RTT, maybe we should actually prioritize latencies: if more people actually provide the content, you might be able, or want, to find the one that is closest to you. Yep. I think that someone was saying something? Nope.
C
I had a question, are you done? Yeah? Right, so you've shown a graph earlier, I think it was one of the first ones, where we saw the sharp drop, this one, on the left-hand side, and there are still some outliers beyond the 24th hour.
A
So I checked the versions of the peers knocking at that door. The hydras are on the 7.4 version, which is the latest one, and I had the chance to talk with Gus, and he was telling me that, of course, the hydras hold a huge set of provider records, so it might be due to the fact that...

A
It takes them a lot of time to garbage-collect provider records, so that would explain why the drop is not so sharp. But we can still see that there is an attempt to actually stop sharing it. And in the end, as soon as one peer has it, everybody can access the content. So yeah. No, no, I mean, at one point they actually do stop sharing it, mostly because they are also saving space on their own side.
B
Yeah, I would doubt that it would be something that you definitely want to stop sharing after 24 hours.
A
B
I agree, I agree, yeah. It could be, I don't know... ideally it would depend. If it's a website, I mean, would you really say, you know, stop providing the records after a week or a month? You probably still want it to be there after a week; you know, it's a website, you're showing something.
A
I mean, on the other side, if you want to share the picture of your kitty, you just seed it once so that someone else or your friend downloads it; yeah, then you are just going to shut down, and if someone hits you, you are going to say: I don't have the content.
D
Unless it gets popular, then you may want to keep it, so that's a different strategy.
A
Yeah, I mean, there is huge room for improvement in several aspects, but I think that the most important one is that, for example, Janice was mentioning the optimistic provide; these times for publishing one single CID are too much, I think. That's what I would...
C
A
C
A
I think that it was above 10 percent; it was 9, 10. There you go.