From YouTube: Unconf 1: Parts of the IPFS Stack that could do with more monitoring - @guseggert - Measuring IPFS
A
Yeah, so this, hopefully this is more interactive than me just talking at you, because we're getting into, like, the unconf section of it, yeah. I think we want to have, like, a conversation about what we need: what kind of measurements do we need to improve on or add to the IPFS network? What are we missing? Or are there questions about how we can implement certain kinds of measurements?
A
So I wanted to seed it a little bit with: why are we measuring stuff in general? Because there's a lot of different reasons that we measure things. So yeah, I'm Gus, my GitHub handle is guseggert. I am on the Protocol Labs IPFS Stewards team, so I work on Kubo and the IPFS protocols.
A
My background is, like, I came to PL from AWS, where I worked on web2 services, and before that I did A/B testing, so it was a lot of thinking about metrics and how to make sense of them. And mobile experimentation in particular is interesting because the problems are very similar to the problems we have in peer-to-peer networks, where nodes come and go randomly, and those kinds of problems. So yeah.
A
Why are we measuring stuff? One of the big reasons we measure things is because we want to know that things work, and when they stop working, right? So we can catch bugs.
A
Also, I found that the process of defining what "working" means can actually be pretty difficult, and, once you figure out what "working" means, being able to observe it can be even more difficult, especially in these peer-to-peer systems. For example, with the Hydra boosters, one of the big problems with them...
A
...is we don't know how to tell objectively if they're doing their job, which is, like, to help out the network, basically the DHT network. And then reality turns out to be pretty surprising sometimes. Like, a lot of times we think that reality is one way, but then, when we look at it, it turns out to be pretty different from what we expected.
A
It's like having a little bit of humbleness about what we're working on. The data can actually imbue humility in us: we're really bad at predicting the future. Like, Einstein didn't think that nuclear energy was feasible. And also external events happen that nobody can predict, like COVID, that dramatically change people's behavior, and if we have the data there to already understand what's happening, we can adapt pretty quickly to it. And there's also, particularly in IPFS, a lot of constants, like, for example, in k-buckets.
I
think
dennis
was
talking
about
this
yesterday
and
yeah
like
assumptions
that
led
to
those
or
maybe
there
were
no
assumptions-
I
don't
know,
but
they
can
change
over
time,
and
so,
if
we're
watching
the
data
that
led
to
those
assumptions
we'll
know
over
time
when
we
need
to
adjust
constants
as
a
developer,
this
one
rings
true
to
me:
you're,
always
like
ramping
up
people.
A
People come and go working on these problems, and if you have the data for them to look at, they can ramp up a lot more quickly, rather than having all the domain knowledge in your head.
A
Emergent behavior, especially in these peer-to-peer systems, can be hard to predict, because we have these simple algorithms, but then, when they get together in swarms of nodes and stuff, they can behave in unpredictable ways. Trying to figure out what kind of metric we would use to detect this feedback loop here would actually be pretty interesting.
A
Oh yeah, and then also to keep us honest about what we're working on, so that we don't end up working on problems that nobody cares about, but that we think are particularly interesting.
A
Yeah, and so I also feel strongly that sometimes metrics are not enough, because they discard information by design, and sometimes the information they're discarding is information you care about. So you also need to verify that your metrics are the right metrics by looking at the information you're discarding, and there's a lot of ways to do that, right? We can look at individual cases, and logs, and random sampling, and looking at probability distributions. For example, this guy on Twitter was complaining about a bimodal distribution.
A
He was seeing it when he was pinging probably Google or something. Can anyone figure out why there would be a bimodal distribution here? Does anyone have a hypothesis about why your pings would be bimodal like this? It turned out that at noon on Wednesday his latency went down just a little bit, and so you see that by looking at this graph here. Instead, if you were just looking at a single number, you would never see this information. This is a lot of information, right?
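A toy illustration of that point about distributions, with made-up numbers: bucketing raw ping RTTs into even a crude text histogram makes two modes visible, while a single summary statistic would sit between them and describe almost no real sample.

```go
// Toy example: a histogram exposes bimodality that a mean would hide.
// All RTT values below are invented for illustration.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Two clusters of RTTs (ms), e.g. "before noon" vs "after noon".
	rtts := []float64{9, 10, 11, 10, 9, 21, 22, 20, 21, 23, 10, 22}

	const bucketMs = 5.0
	hist := map[int]int{}
	for _, r := range rtts {
		hist[int(r/bucketMs)]++
	}
	for b := 0; b <= 5; b++ {
		lo := float64(b) * bucketMs
		fmt.Printf("%4.0f-%4.0f ms | %s\n", lo, lo+bucketMs, strings.Repeat("#", hist[b]))
	}
	// The output shows two separate peaks; the mean (~16 ms) falls in
	// the empty middle bucket and matches almost none of the samples.
}
```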
A
Something happened at exactly noon, and then at exactly noon the next day it looks like the variance might have increased or something. So just by breaking out of the metrics bubble and looking at graphs and distributions and stuff like that, we can get all kinds of interesting information about how the network's behaving. So yeah, what do we already measure? We already measure quite a bit, and a lot of different people measure it too. I think Dennis has already shown the Nebula crawler.
A
You know, we get interesting behavioral patterns, like one-off nodes that appear and then disappear immediately, and churn. These are probably some of the most interesting graphs in this report. And then there's a lot of, like, geographic distribution, where you can see what countries nodes are coming from.
A
For example, we run the ipfs.io gateways as well, and so we get a bunch of great metrics from that, if it loads: like, for example, want list sizes from Bitswap, all these time-to-first-byte metrics from nginx, and the list goes on and on and on. Latencies and errors and timeouts. And because ipfs.io is so popular, it can be a pretty good window into the network.
A
We also collect a lot of logs and have Elasticsearch dashboards. I couldn't get this one to quite load, but you can see, for example, we can break things down.
A
What was that? I don't know why some of these are erroring out, maybe the interval's too high or something. Someone needs to fix these. But just this graph alone is pretty interesting: you get the devices that are requesting data from the gateways.
A
It might take a while to load, but yeah, we have this system that does DHT queries and puts and stuff and measures things, and we get an overall view of how the DHT is performing.
A
So you can see it processes, like, a ton of requests. We can get a good idea of how much traffic's flowing around in the network just from this graph alone. If we look at the database, we can see... actually, this is interesting.
A
In the last day there was, like, a huge surge: there were like a billion new records that were put into the DHT. So we can get a good global view of the number of records floating around in the DHT, and then the number of records that expire and get evicted.
A
Also unique nodes and their uptimes and their countries as well. And then this is a pretty good website as well, by this group called Trudi. Oh wait, not this one! This one. And I'll send these slides out afterwards, so that y'all can get these links.
A
And then some examples of stuff we might want, right? There's a really awesome GitHub repo that has requests for measurements in it. So if you all have ideas or needs for measurements that you want to get prioritized, you can submit an RFM here and describe what the measurement is and how you would measure it. And then I believe there are grants, right? There are bounties on these for people to implement as well.
A
Yeah, some things that I have: like was just mentioned in the last talk, right, Bitswap is sorely missing lots of metrics, like the number of cancels, and how successful the discovery process is when Bitswap broadcasts out to peers, right? Latencies also. Like, how useful is it that Bitswap...
A
How useful are the Bitswap sessions, I think, is an interesting question to ask, because we don't have any data on those at all, I don't think. Leeches in the network, too. And I think we were just talking earlier about the idea of collecting data from network nodes: like, maybe each network node could have a little server that responds with some metrics, and then we could just crawl the network and get metrics from every node about what their experience on the network looks like, yeah.
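A very rough Go sketch of that idea; everything here is assumed for illustration (the /metrics.json path, the port, and the NodeMetrics fields are hypothetical, not an existing Kubo or libp2p API):

```go
// Hypothetical sketch: each node exposes a tiny read-only metrics
// endpoint, and a crawler fetches it from every peer it discovers.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// NodeMetrics is an assumed, illustrative schema; the real fields
// (and what is safe to share) would need to be agreed upon.
type NodeMetrics struct {
	PeerID            string  `json:"peer_id"`
	Uptime            string  `json:"uptime"`
	BitswapWantlist   int     `json:"bitswap_wantlist_len"`
	DHTLookupP50Milli float64 `json:"dht_lookup_p50_ms"`
}

func serveMetrics(m func() NodeMetrics) {
	http.HandleFunc("/metrics.json", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(m())
	})
	http.ListenAndServe(":5002", nil) // port is arbitrary in this sketch
}

// crawl fetches metrics from a list of peer addresses. In reality you
// would discover these by crawling the DHT, e.g. with Nebula.
func crawl(addrs []string) []NodeMetrics {
	client := &http.Client{Timeout: 5 * time.Second}
	var out []NodeMetrics
	for _, a := range addrs {
		resp, err := client.Get("http://" + a + "/metrics.json")
		if err != nil {
			continue // unreachable nodes are expected; skip them
		}
		var m NodeMetrics
		if json.NewDecoder(resp.Body).Decode(&m) == nil {
			out = append(out, m)
		}
		resp.Body.Close()
	}
	return out
}

func main() {
	go serveMetrics(func() NodeMetrics {
		return NodeMetrics{PeerID: "12D3KooW...", Uptime: time.Hour.String()}
	})
	time.Sleep(100 * time.Millisecond) // give the server a moment to start
	fmt.Println(crawl([]string{"127.0.0.1:5002"}))
}
```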
A
Well, I think, if other people don't know: I think the Hydras have an interesting behavior that not a lot of people realize. If they get a request for a record that they don't have, they'll return to the caller that they don't have it, but they'll also start a background process to go find it, and so that can create some extra work on the DHT.
A
But if there's a lot of, like, unfindable records, right, then it could just be creating a lot of unnecessary work.
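A minimal, self-contained Go sketch of that behavior as described; this illustrates the pattern, not the actual hydra-booster code:

```go
// Sketch of the described Hydra behavior: answer "not found" right
// away, but kick off a background lookup so the record is cached for
// future callers. If many keys are unfindable, those background
// lookups are pure wasted work on the DHT.
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
	"time"
)

var ErrNotFound = errors.New("record not found")

// finder stands in for a DHT lookup; in the real system this would
// be a Kademlia FindValue query.
type finder func(ctx context.Context, key string) ([]byte, error)

type hydraNode struct {
	mu    sync.Mutex
	store map[string][]byte
	find  finder
}

func (h *hydraNode) getRecord(key string) ([]byte, error) {
	h.mu.Lock()
	rec, ok := h.store[key]
	h.mu.Unlock()
	if ok {
		return rec, nil
	}
	// Tell the caller immediately that we don't have it...
	go func() {
		// ...but also try to find it in the background and cache it.
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()
		if rec, err := h.find(ctx, key); err == nil {
			h.mu.Lock()
			h.store[key] = rec
			h.mu.Unlock()
		}
	}()
	return nil, ErrNotFound
}

func main() {
	h := &hydraNode{
		store: map[string][]byte{},
		find: func(ctx context.Context, key string) ([]byte, error) {
			return []byte("value-for-" + key), nil // stubbed lookup
		},
	}
	_, err := h.getRecord("k1") // first call misses...
	fmt.Println(err)
	time.Sleep(50 * time.Millisecond)
	rec, _ := h.getRecord("k1") // ...background fetch has cached it
	fmt.Println(string(rec))
}
```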
C
I'd like to hear from people about, you know, the nodes themselves emitting more stats and us being able to query the network, you know, live, and what people think about that, as far as everything from security to performance.
D
It is an issue when you're a provider like that: for that to work, you need those stats to be available from the outside, where typically your nodes are not accessible that way. And what if you're trying not to allow that, because that kind of access to your infrastructure is bad?
E
No, no, definitely. Tracing within the nodes has been something we've thought about. There are a few things there: one is, what would be useful? The second is, what percentage of that would make sense to share with other nodes?
E
Yeah, exactly, and also what doesn't touch on any privacy issues for nodes by, you know, sharing your stuff with others. Yeah, definitely activity on the network. But I guess there are maybe...
E
I think, in order to do this exercise, it would perhaps be good to start from the other side and say: what do we want to use these metrics for, right? Do we want to use them to make better decisions about routing table entries, or do we want them in order to choose peers that are geographically closer to us to connect to...
A
Or request content from, right? Like, what is it? And then, as a Kubo developer, I'm really interested in what people's experiences are using the software that I work on, and I don't have a good understanding of that, because we don't really collect that kind of thing. It's, like, annoying gathering information from all the nodes about whether they've been having a good experience, right, and if not, why not.
C
Yeah, what does user experience mean? We should define that, for sure, yeah, but for feature development. To be honest, to answer your question: for me, as, you know, an IPFS Stewards team member working on IPFS Desktop and Kubo, how do I know that... For example, one question: I just looked at this yesterday, we have about a thousand active users, you know, unique users per day, from IPFS Desktop, right? Okay. So how do I know if those are helping or hurting things, you know? Like, okay: by being another node on the network to provide content, potentially caching content that they are looking up. Maybe they're around a long time. Maybe we have one to five desktop users that are...
E
An item in the requests for measurements? Yeah, yeah, absolutely, yes. Anyone can go and start a PR or start an issue and discuss ideas that they have that we think are going to be useful. And yeah, if you go to each RFM, basically there is a short description, and in most cases, not all of the cases, but in most cases, there is a short description of the methodology as well, like...
E
How you would go about doing it, or, you know, suggestions or thoughts and stuff like that. And once one becomes a priority, then we work harder to come up with a methodology if we don't already have one. And actually, yeah, so, of course, you're more than welcome to start any number of these yourselves, yeah.
E
All these things that you just mentioned, you haven't documented them somewhere?
F
So we have, like, specific use cases, like queries using some part of the database that we knew we had some problems with before, so we executed those queries on all of the versions of the database. So we can easily see spikes in the response times if we did something really wrong, or CPU times that are totally different from the previous version, or we can clearly see if some improvements that we implemented actually improved things.
A
Environments that receive mirrored traffic from production, basically: the production servers will just forward copies of all their requests off to some hosts that you can just do random stuff to. They don't actually... responses don't go back to anyone. But that's only for gateway stuff, right?
A
We'll put it on a gateway node and watch the metrics and make sure it's okay. So this is done somewhat primitively right now. Okay, like, that's not a good way to experiment with tweaks to network protocols and stuff, right? Yeah, you want, like, a little microcosm of an IPFS network where you can observe how the nodes behave, right?
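A simplified Go sketch of that kind of request mirroring; the addresses are placeholders, and this stands in for whatever the production gateways actually do:

```go
// Sketch: a reverse proxy serves production traffic and asynchronously
// replays a copy of each request to a shadow host whose responses are
// discarded, so experiments never affect real clients.
package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical addresses: the real backend and a shadow node
	// we want to observe under live traffic.
	prod, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	const shadow = "http://127.0.0.1:8081"
	proxy := httputil.NewSingleHostReverseProxy(prod)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Buffer the body so it can be sent to both destinations.
		body, _ := io.ReadAll(r.Body)
		r.Body = io.NopCloser(bytes.NewReader(body))

		// Fire-and-forget copy to the shadow host; its response goes
		// nowhere, so clients only ever see production behavior.
		go func(method, uri string, b []byte) {
			req, err := http.NewRequest(method, shadow+uri, bytes.NewReader(b))
			if err != nil {
				return
			}
			if resp, err := http.DefaultClient.Do(req); err == nil {
				io.Copy(io.Discard, resp.Body)
				resp.Body.Close()
			}
		}(r.Method, r.URL.RequestURI(), body)

		proxy.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```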
G
Just one on the Bitswap efficacy, basically, yeah. What I was wondering is, instead of generating a bunch of CIDs and sending them out, is looking at the want lists that are out there in the wild and then seeing how long those CIDs stay on them, you know, kind of, like, sort of polling peers. It's a little rude, I suppose, but it would be really interesting to see in practice how long something stays on a certain peer's want list, or a sample of those, yeah.
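A small Go sketch of that measurement; it assumes some out-of-band mechanism for periodically sampling a peer's want list (e.g. speaking the Bitswap protocol to it), which is out of scope here:

```go
// Track how long each CID stays on a peer's want list, given a stream
// of periodic want-list snapshots from whatever sampler obtains them.
package main

import (
	"fmt"
	"time"
)

type dwellTracker struct {
	firstSeen map[string]time.Time // CID -> when it first appeared
	dwell     []time.Duration      // completed stays
}

func newDwellTracker() *dwellTracker {
	return &dwellTracker{firstSeen: map[string]time.Time{}}
}

// observe ingests one want-list snapshot taken at time t.
func (d *dwellTracker) observe(t time.Time, wantlist []string) {
	present := map[string]bool{}
	for _, c := range wantlist {
		present[c] = true
		if _, ok := d.firstSeen[c]; !ok {
			d.firstSeen[c] = t // new entry on the want list
		}
	}
	for c, since := range d.firstSeen {
		if !present[c] { // entry dropped off: record its dwell time
			d.dwell = append(d.dwell, t.Sub(since))
			delete(d.firstSeen, c)
		}
	}
}

func main() {
	tr := newDwellTracker()
	t0 := time.Now()
	// Three fake snapshots, 30s apart: cid2 disappears after ~60s,
	// which might mean it was fetched (good) or given up on (bad).
	tr.observe(t0, []string{"cid1", "cid2"})
	tr.observe(t0.Add(30*time.Second), []string{"cid1", "cid2"})
	tr.observe(t0.Add(60*time.Second), []string{"cid1"})
	fmt.Println(tr.dwell)
}
```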
G
Yeah, just polling for a little while and seeing if, you know, it shrinks, if it goes up or down or whatever, and then that might be an indicator of user experience.
B
I think the important thing here is that we have this discussion publicly in these kinds of repositories, yes. Like, now we all in this room know that this GitHub repository exists. I mean, you need to measure what people want to be measured: we can have just our own narrow idea of what needs to be measured, but it's also interesting to know what other people think that is.