So the main motivation here is that we need measurements, because without measurements we cannot improve whatever we have. If we do not have measurements, we cannot find bottlenecks or identify the slow-performing components of protocols. We also need measurements when we want to improve something: without them, we cannot know whether the improvement we have just deployed is actually improving performance, degrading performance, or having some other unwanted side effect. So that is what this team is doing.
The effort and the team is called ProbeLab. Let me introduce myself: I am Janis, a research scientist at Protocol Labs, and together with a team of excellent people, at PL and outside of PL, we are trying to make this thing happen: to have an IPFS network observatory.
So yeah, as I implied, measuring network performance is not a target in itself. It is of course interesting to see graphs, many of which you are going to see today, and to see how things are performing out there in the wild. But the ultimate target is to identify bottlenecks, see where there is space for improvement, and eventually design protocol optimizations and make the protocols perform even better than they currently do. We have run a big measurement campaign, which started last summer and is still ongoing.
This is only a part of it, just because we had a nice figure for it; the campaign is still carrying on. We took a small part of it, focused on that period, dived a lot deeper into the measurements we collected, and found several opportunities.
There is a lot of low-hanging fruit here, one piece of which is the provide process in the IPFS network. The provide process is what happens when you want to publish something to the network: it takes a long time before the content is actually public and others are able to go and get the data.
We see down there that the x-axis is time, so it measures the time it takes to actually have content available, and we see that it is not even tens of seconds, it goes to more than a hundred seconds. This is the process where the node is trying to find the best possible peers with which to store the provider record, and so on.
What we actually found out is that the peers to be contacted in order to make the content public are found in less than a second. Well, even if not all of the peers, most of the peers are found within a very, very short time period. What this tells us is that, instead of having to wait tens or hundreds of seconds, we can become smarter and find ways to reduce the provide time by an order of magnitude.
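To make that concrete, here is a minimal sketch of the kind of analysis behind this observation: given, for one provide operation, the total duration and the times at which the peers that eventually held the provider record were first discovered, compute how quickly most of them were found. The trace format and all numbers are invented for illustration; the real campaign's data format differs:

```python
import math

def time_to_discover(discovery_times, fraction=0.9):
    """Return the elapsed time (s) by which `fraction` of the peers that
    eventually held the provider record had been discovered."""
    ts = sorted(discovery_times)
    idx = math.ceil(fraction * len(ts)) - 1  # index of the fraction-quantile peer
    return ts[idx]

# Invented example trace: total provide duration vs. peer-discovery times.
trace = {
    "total_s": 112.4,
    "peer_discovery_s": [0.21, 0.34, 0.35, 0.48, 0.52,
                         0.61, 0.70, 0.74, 0.80, 4.90],
}

t90 = time_to_discover(trace["peer_discovery_s"], 0.9)
print(t90)                     # time by which 90% of the selected peers were found
print(trace["total_s"] / t90)  # rough speed-up a smarter provide could target
```

The gap between `t90` and `total_s` is exactly the "order of magnitude" opportunity described above.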
Another thing we did is that we wanted to see how the DHT lookup process works. The DHT lookup is what happens when you are trying to retrieve something from the network: as a user, you go and request a CID and you want to get the content back. There are several steps taken by the network and the protocols in order to make this happen, and we wanted to see whether some of these steps are problematic: is there some step that is taking too long, is there a bottleneck somewhere, and so on.
What we did is that we wanted to interact with the main network; we did not want to do simulations or spin up just five or ten nodes and play around with them. We wanted to interact with the actual network. So we spun up several nodes that we controlled ourselves and tracked down what happens throughout the process. We published fresh content that no one else had retrieved, cached, or pinned; the CID was then communicated to the rest of the nodes that we were controlling, and then we started requesting that CID from other parts of the network.
So we measured the whole process of providing content and then coming back to retrieve it, and we found several interesting results, a summary of which is just this one figure. The leftmost figure is the overall retrieval duration; the rightmost figure is what happens once you have found the content and connected, the line-speed part, the point-to-point connection where you transfer the content; and the middle one is the DHT walk duration.
I am not going to go into detail; I will point you to places where you can read much more about this. But one interesting thing you can observe here is that the first two, the leftmost and the middle figure, are exactly the same, they are identical, except that the leftmost is shifted by one second. This tells us that the Bitswap process, which is the first step that happens when we try to retrieve content from IPFS, is taking this whole one second, whereas the DHT walk itself is pretty quick.
We can see that from several parts of the world, from several of the instances that we ran, the DHT walk latency is less than a second. There is more detail here, but the takeaway is that the DHT has improved quite a bit. It does, of course, go to two, three, and four seconds in some cases, but we see that for a large part of the distribution it can be kept below one second, which is great news, great performance.
Now, why we have to wait for this extra second is a question that needs to be answered. Basically, shifting that back means that if Bitswap does not manage to find the content, in many cases we are just waiting for one second, increasing not the DHT lookup but the overall retrieval process by a whole second, by more than perhaps a hundred percent in some cases, which is obviously something we do not want. So we are now working on identifying what we should do with the Bitswap process.
Should we start the Bitswap discovery together with the DHT lookup, so that these two steps progress in parallel?
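One way to sketch that idea is to race the two mechanisms and take whichever finds providers first, instead of serializing them. This is a minimal sketch under stated assumptions, not the actual go-libp2p or Bitswap implementation; `bitswap_discover` and `dht_lookup` are invented stand-ins:

```python
import concurrent.futures
import time

def bitswap_discover(cid):
    """Stand-in for asking already-connected peers over Bitswap."""
    time.sleep(0.2)
    return []  # pretend no connected peer had the block

def dht_lookup(cid):
    """Stand-in for a DHT walk to the provider records."""
    time.sleep(0.1)
    return ["peerA", "peerB"]

def find_providers(cid, timeout=5.0):
    """Run Bitswap discovery and the DHT lookup in parallel and return the
    first non-empty provider list, instead of waiting a fixed second for
    Bitswap before falling back to the DHT."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(bitswap_discover, cid),
                   pool.submit(dht_lookup, cid)]
        for fut in concurrent.futures.as_completed(futures, timeout=timeout):
            providers = fut.result()
            if providers:  # first mechanism to find anything wins
                return providers
    return []

print(find_providers("bafy...example"))
```

With these stubs the DHT stub answers first, so the caller never pays the Bitswap wait; in the serialized design it would always pay it when Bitswap comes up empty.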
So this is the second opportunity, and what I want to highlight here is that there is a lot of low-hanging fruit: if you start looking deeper into protocol operation and performance, you can figure these things out and design protocol optimizations, easy in some cases, not always easy. And you can find other things as well. By having measurements that run continuously on the network, you can figure out what the agent version uptake is.
You can figure out the churn rate of the network: the overall rate (the leftmost figure there), the agent-version-based churn rate, and then the per-release one. So, for example, you can figure out if some release is churning too much; maybe there might be some bug or something. It is good for monitoring and for identification of bugs and things like that.
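As an illustration of how such a churn number can be derived from repeated crawls, here is a minimal sketch; the snapshot format (peer-ID-to-agent-version maps per crawl) and the version strings are assumptions for the example, not the actual crawler schema:

```python
from collections import defaultdict

def churn_rate(prev_peers, curr_peers):
    """Fraction of peers seen in the previous crawl that are gone in the
    current one (a simple departure-based churn definition)."""
    if not prev_peers:
        return 0.0
    departed = prev_peers - curr_peers
    return len(departed) / len(prev_peers)

def churn_by_agent(prev, curr):
    """Per-agent-version churn; `prev`/`curr` map peer ID -> agent version."""
    by_agent = defaultdict(lambda: [set(), set()])
    for pid, agent in prev.items():
        by_agent[agent][0].add(pid)
    for pid, agent in curr.items():
        by_agent[agent][1].add(pid)
    return {agent: churn_rate(p, c) for agent, (p, c) in by_agent.items()}

# Invented example crawls, fifteen minutes apart
prev = {"p1": "go-ipfs/0.13", "p2": "go-ipfs/0.13",
        "p3": "go-ipfs/0.12", "p4": "go-ipfs/0.12"}
curr = {"p1": "go-ipfs/0.13", "p2": "go-ipfs/0.13", "p5": "go-ipfs/0.12"}

print(churn_rate(set(prev), set(curr)))  # overall: p3 and p4 left -> 0.5
print(churn_by_agent(prev, curr))        # per-version breakdown
```

A release whose per-version churn stands out against the overall rate, as in the toy data here, is exactly the kind of signal that flags a possible bug.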
We can also see what the coverage is like: we found out that IPFS DHT server peers are found in more than 2,700 ASes (ISPs), which is great news. It means there is a lot of diversity in terms of geography and all that. We can see concentration as well.
We found out that the top 10 ASes where IPFS nodes have been found contain 65 percent of the IP addresses, or IPFS peers. So there is some concentration there as well; it does not mean the system is fully, equally distributed among the ASes that we have found.
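The concentration figure can be computed from a peer-to-AS mapping in a few lines; a sketch with invented peer IDs and AS numbers:

```python
from collections import Counter

def top_n_share(peer_to_asn, n=10):
    """Fraction of peers hosted in the n ASes with the most peers."""
    counts = Counter(peer_to_asn.values())
    top = sum(c for _, c in counts.most_common(n))
    return top / len(peer_to_asn)

# Invented mapping of peer IDs to autonomous system numbers:
# six peers in AS1, one peer in each of four other ASes.
peer_to_asn = {f"peer{i}": ("AS1" if i < 6 else f"AS{i}") for i in range(10)}

print(top_n_share(peer_to_asn, n=1))  # the single biggest AS hosts 6/10 peers
```

Applied to the real crawl data, the same calculation with `n=10` yields the 65 percent figure quoted above.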
You can also see what the cloud provider dependency is. You would expect that lots of DHT server peers are deployed in big cloud providers, which, as it turns out, is not the case: we found that less than three percent of IPFS server peers appear in the big cloud providers, at least some of them. We cannot really know for certain whether a host is a cloud provider, but even if some are, none of them are the very well known, very big ones, which, for decentralization purposes, is a great thing to know.
There are many more results than what I presented here, and it is a great resource to go and figure out more details about the network. We have also documented everything we have done so far, well, not even everything, but a good part of it, in a recent paper that we published with this great group of people. I will share the slides; you can get the CID, it is on IPFS.
It is soon going to be in the ACM digital library, as well as open access, so you will be able to find it from there too. I highly recommend reading through it: it includes a nice description of how the whole system works, what measurements we did, the methodologies and all the details, a summary of which I am talking about today.
So with that in mind, we want to go on and build a bigger thing, which is going to be the IPFS network observatory. It does not, of course, have a fixed definition: you never know exactly where you want to end up, and there is always much more detail you can go into. But we want to have several continuous monitoring and measurement processes running, and we are not doing this alone; we are doing it with a great community of people who share the same interest.
We had a workshop last week in Bologna on decentralized internet, networks, protocols and systems. It was great: lots of people were there, we had several keynote talks, ten papers, two tutorials, three demos, a great day generally. So yeah, this space is definitely open for collaboration, and here are some call-outs. For now, you can check the ProbeLab page, which currently lives in Notion.
I will share the slides and you can follow the links from there. We have several grants open, specifically on network measurements, that you can find on the Radius platform; again, the link is there. And we have a GitHub repository where much of the action happens and where many of the results are actually published.
Now, with that, let's come to the plan for the day. We have several talks on very interesting topics; I am not going to go through each one of them.
The speakers are going to explain what they will be talking about, but we are going to find out lots of things about whether the protocols that we are all using, building and improving are actually performing as they should, or whether we need any modifications or optimizations, and any recommendations that we can give to the community and the teams developing them.
Please bring up topics that you think need more investigation. I will share a document that we can use as a parking lot to put notes, pointers and even questions.
If you do not manage to ask a question, we can discuss it later. This is going to go on until briefly after lunch, I think, unless we are faster and finish earlier; the agenda is pretty flexible these days. Then there is the rest of the day, in the afternoon.
We continue keeping notes in the parking lot and putting our thoughts there, and we are going to have a number of breakout sessions from about, I think, 2 p.m. to 5 p.m. roughly. The topics are not defined yet; we have got some ideas, but it should ultimately be a kind of team exercise, and we can vote on what we want to discuss.
We can have the breakout sessions here all together, or break out into smaller groups and spread out. The target is basically to come up with design ideas and measurement methodologies that we can then go and build and apply in the months to come.
So I would like to see this as a kind of roadmap-making meeting, for everyone and of course for our team, for the months to come, and of course for any collaborations that we can build with any of you. So yeah, that's it from me. Any questions on the logistics, or anything else?
Did you just review the peer churn in the agent version stats?
Go back to the slide, you mean? Yeah, of course. It's not the most recent one; if you go to this page and scroll all the way down, you'll find the latest one. I don't even know when that figure is from. What does it say? It doesn't say the date; it's just a screenshot. But yeah, anyway, go on.
Yeah, so the Hydra nodes were there; we didn't isolate them or turn them off. They are part of the network, so they are included in these results. So I'd be interested to see how these numbers would look if you could take the Hydras out. Yeah, you're not alone in that.
We can either go the hardcore approach and just turn them off for a day, run those experiments and see if people start complaining or things start falling apart; but is there a more graceful way of doing this? I think a more graceful, smoother way of doing it is to just ignore any response that you get from nodes that have the Hydra agent version and wait for other nodes to respond.
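That filtering idea can be sketched as a post-processing step over provider responses: drop any response whose peer advertises a Hydra agent version before computing the statistics, so only organic peers count. The response record format here is an assumption for illustration:

```python
def non_hydra(responses, marker="hydra"):
    """Keep only responses from peers whose agent version does not
    contain the Hydra marker (case-insensitive substring match)."""
    return [r for r in responses if marker not in r["agent_version"].lower()]

# Invented provider responses with their advertised agent versions
responses = [
    {"peer": "p1", "agent_version": "hydra-booster/0.7.4"},
    {"peer": "p2", "agent_version": "go-ipfs/0.13.0"},
    {"peer": "p3", "agent_version": "go-ipfs/0.12.2"},
]

print([r["peer"] for r in non_hydra(responses)])  # only the non-Hydra peers
```

This is softer than switching the Hydras off, but as noted above it may not be a hundred percent accurate, since the remaining peers' routing tables were still shaped by the Hydras.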
I don't know if that's a hundred percent accurate, though. No, because the nodes that you're going to reach that are not Hydra nodes are still going to be connected to them.
I don't know the exact locations, but I'm pretty sure they're on many different continents, not only in Europe; I'm pretty sure about that. So the only way I can explain this difference, the better performance of the EU-based requests that we did, is that Europe is geographically kind of in the middle of everything else: North and South America, the Middle East, Asia and so on.
So in a sense, no matter where the content is actually stored, and assuming that you've got nodes all over the place, you can probably reach them faster in terms of speed-of-light latency. That's one explanation, but we actually don't know; it's one of the things that we want to dive deeper into, to figure out why this is the case.
All right then, yeah, I'll share the slides and a document to keep notes. Feel free to use Slack as well; we've got a Slack channel, track-measuring-ipfs, so we can discuss there and ask questions as well.