A
So, hey everyone. I'm Thibault, and I'm going to present more or less how we monitor the Cloudflare IPFS gateway, which is slightly different, I think, from the talk this morning, which was more focused on measurements.
A
Of course, monitoring involves some measurements, but with a more targeted, operational aim: how we can best see the health of our own system, which depends on the health of the overall IPFS network. That's, more generally, the theme of this presentation. So yeah, as I said, I'm Thibault, a research engineer at Cloudflare. I work mostly on distributed web projects: IPFS, Ethereum. And I also tend to like croissants — I mean, there were no croissants.
A
But I did not know that while I was writing the slides. Definitely, if there are some around, I'm happy to meet you next to the croissants — always fun discussions.
In the talk today, I will present a high-level overview of what Cloudflare's system looks like. Then I'm going to present what monitoring is — what the different systems we're using are — and then where we think there could be some improvement, where we go next. I'll try to be short so that there can also be time for questions, whether about starting experiments based on this infrastructure or questions more generally. So, Cloudflare's architecture — which is kind of the place we call home at Cloudflare because, well, that's where we are. Cloudflare operates an IPFS gateway, which is an HTTP interface to the IPFS network.
A
More generally, if you take the eyeballs — which would be your end clients — they make an HTTP connection to cloudflare-ipfs.com, or another domain, reaching Cloudflare's edge. Cloudflare's edge is composed of 200, maybe more, points of presence, where we have our metal.
A
That's where we operate Cloudflare Workers and some caching. What makes the gateway fast is basically that we cache the content as close as we can to the users and try to have most of the logic there.
A
If the content is not in the cache, then we reach out to IPFS nodes that we're running in the background, and these nodes are the ones that are actually in contact with the IPFS network. They're not Kubo nodes, but they're heavily based on the historical go-ipfs, used as a library, so there are custom settings for the nodes — things like peering, the datastore, and some of the strategy around what kind of verification we do.
A
For instance, we always perform some checks, even on the client side, so there is some strategy at each of these steps. We would like to be informed, to have some metrics, and we would like to better understand how they work. There are two types of data generation: the active monitoring that we perform with the IPFS monitor, and then the more passive monitoring that is generated as requests come through.
A
Of course, the IPFS monitor also generates some requests which go to our data stores, but most of the requests going into these various stores are generated by the traffic. We'll go into more detail on how we use each of these systems and data stores.
A
For instance, you can have a chain of services. Instead of accessing cloudflare-ipfs.com directly, there may actually be a caching gateway in front — which, for instance, Protocol Labs operates with nft.storage — and the same gateway may also be chained with another customer. So one of the challenges, when we have an issue, is finding the root cause: what path the request is going through. What's even more interesting is that each of these services may have its own caching strategies and its own invalidation, and trying to get a good view through all that can sometimes be challenging, yeah.
A
The main thing is the attitude when trying to debug: what do you look at? What do you look for? How do you understand the data and read those graphs?
A
So if we take a look at the monitoring — which is, I mean, mandatory from an operational perspective — if your gateway starts to answer in 10 seconds instead of less than a second, you need to understand what's happening; or if there are new issues in content routing, etc., you need to be alerted and have the tools to debug that live.
A
The first one I mentioned was Prometheus. Prometheus is a time-series database. One of its particularities is that you can only have low cardinality in Prometheus, so you tend not to have a lot of information — tagging and context — available around an event, but it's very useful because you can easily follow and track a service over time.
A
It's
limited
to
numbers,
while
you
can
have
like
a
huge
set
of
numbers,
I
think
it's
limited
on
numbers
and
like
we
allowed
like
for
live
monitoring.
So
an
example
like
take
it
like
what
a
Prometheus
metrics
looks
like
you
would
have
like
a
name
and
there's
tagging.
So,
like
five
PFS,
you
may
want
to
know
like
what
the
total
number
of
requests
that
have
been
received
on
on
the
on
the
back
end.
You
like
can
tag
it.
A
For instance, here I have the Kubernetes namespace, the handler — which gateway — and the type of method that has been received, and all that then allows you to perform some computation over it, to get those beautiful graphs in Grafana.
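A counter of that shape might look like this in Prometheus's text exposition format (the metric and label names here are hypothetical illustrations, not Cloudflare's actual ones):

```text
# HELP gateway_requests_total Requests received by the gateway backend.
# TYPE gateway_requests_total counter
gateway_requests_total{kubernetes_namespace="ipfs",handler="gateway",method="GET"} 10452

# A Grafana panel would then typically chart a rate over it, e.g.:
#   sum by (handler) (rate(gateway_requests_total[5m]))
```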
A
There's a wide set of metrics you can have in Prometheus. Kubo definitely exposes a lot of them, but you can also very easily customize it and add metrics, whether for timing or for anything else you want to follow.
A
Another thing which is quite interesting for following what's happening, and for monitoring, is Sentry. Different from Prometheus, with Sentry we tend to focus on errors, because it's great at giving a lot of context and a lot of metadata around the particular events that happen. And the good thing with Sentry is that you can also map an event back to the actual code that generated it.
A
So it tends to give a lot of context: when you have an error, you already know how many times this error has been happening and which piece of code is actually responsible for generating it. It doesn't give you timing context, and it doesn't give you, at a higher level, the full traces, but it's very good for pinpointing where in the instrumented code an error is actually generated, where other tools may be more global.
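The grouping idea can be sketched like this (a toy illustration of the aggregation pattern, not Sentry's actual implementation): errors are keyed by exception type plus raising location rather than by message, so one bug surfaces as one issue with an occurrence count.

```python
import hashlib
import traceback
from collections import Counter

def fingerprint(exc: BaseException) -> str:
    """Group errors by exception type plus the code location that raised
    them, not by the full message (messages often contain request-specific
    data such as CIDs, which would split one bug into many issues)."""
    tb = traceback.extract_tb(exc.__traceback__)
    frame = tb[-1] if tb else None
    location = f"{frame.filename}:{frame.name}" if frame else "unknown"
    key = f"{type(exc).__name__}|{location}"
    return hashlib.sha1(key.encode()).hexdigest()[:12]

seen = Counter()

def record(exc: BaseException) -> None:
    seen[fingerprint(exc)] += 1

def fetch_block(cid: str) -> None:
    # Hypothetical failing operation with a per-request message.
    raise TimeoutError(f"no provider found for {cid}")

for cid in ("cid-a", "cid-b", "cid-c"):
    try:
        fetch_block(cid)
    except TimeoutError as e:
        record(e)

# Three failures, three different messages, one fingerprint:
# same error type, same raising location.
assert len(seen) == 1 and sum(seen.values()) == 3
```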
A
Finally, the last thing I want to mention: we use ClickHouse a lot. ClickHouse is probably, out of all those stores, the most traditional database. It's a column-based database, which allows us to store arbitrary data.
A
This
data
is
also
sampled
so
like
that
allows
when
you
have
like
an
event
that,
like
is
repeated,
a
lot
to
just
not
store
this
event
like
as
many
times
as
it
happens,
it's
very
useful
for
like
these
kind
of
arbitrary
data,
so,
for
instance
like
if,
like
you
want
to
know
which
CID
is
being
accessed
the
most
or
like
for
like
which,
from
which
domain,
which
kind
of
Ip
you
like
which
country
Etc.
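The sampling mechanics can be sketched as follows (a toy illustration, not Cloudflare's actual pipeline; ClickHouse itself supports sampled tables queried with a `SAMPLE` clause): keep a deterministic 1-in-N slice of events and scale counts back up at query time.

```python
import hashlib

SAMPLE_RATE = 100  # store ~1 in 100 events; each kept row "weighs" 100

def sampled(event_key: str, rate: int = SAMPLE_RATE) -> bool:
    # Hash-based instead of random, so whether a given event is kept
    # is reproducible and roughly uniform across the key space.
    h = int.from_bytes(hashlib.sha256(event_key.encode()).digest()[:8], "big")
    return h % rate == 0

events = [f"req-{i}" for i in range(100_000)]
kept = [e for e in events if sampled(e)]

# Estimate the true total from the sample by scaling back up;
# it lands close to the real count of 100,000.
estimate = len(kept) * SAMPLE_RATE
assert abs(estimate - len(events)) / len(events) < 0.2
```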
A
Basically, this is also the same platform backing Cloudflare's analytics. In terms of the logic, when we started operating the service it made a lot of sense, because we could already leverage the existing infrastructure without having to manually operate our own cluster, be it a Postgres or a ClickHouse cluster. The other thing I wanted to mention is the IPFS monitor. All the previous tools I was mentioning were about passive monitoring.
A
They receive data, and then we try to build queries and models on top of them to understand what's happening. The IPFS monitor is a bit of a different thing. We started to see a lot of things that could happen on IPFS where we didn't have a good tool to actually measure, live, how these scenarios were performing — say, the publication of an IPNS name, whether the IPNS name has been cached.
A
If it's not cached, the publication of a CID, the access to a DNSLink — basically, the IPFS monitor really does this kind of active probing, in a black-box-monitoring way, so that you can test these multiple scenarios and export them through Prometheus, and so you're also able to track them live. Some of the results we got, which are described on the blog, cover newly created content, in-cache content, and unavailable CIDs.
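The black-box probing pattern can be sketched like this (scenario and metric names are hypothetical, not the real IPFS monitor's): run each scenario end to end, record only the observable outcome, and shape the result for export through Prometheus.

```python
import time

def run_probe(scenario: str, action, timeout: float = 10.0) -> dict:
    """Run one scenario end to end, black-box style: only the observable
    outcome (did it succeed, how long did it take) is recorded."""
    start = time.monotonic()
    try:
        action()
        ok = True
    except Exception:
        ok = False
    elapsed = time.monotonic() - start
    # Shaped like the gauges one would export through Prometheus.
    return {
        "scenario": scenario,
        "probe_success": 1 if ok and elapsed <= timeout else 0,
        "probe_duration_seconds": elapsed,
    }

def fetch_unavailable_cid():
    # Stand-in for a gateway request for a CID with no providers.
    raise TimeoutError("no providers found")

results = [
    run_probe("cached_cid", lambda: None),  # stub: instant cache hit
    run_probe("unavailable_cid", fetch_unavailable_cid),
]
assert results[0]["probe_success"] == 1
assert results[1]["probe_success"] == 0
```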
A
Now that we have all that, there's definitely a lot — in terms of models and tooling — that we could already improve. But the main question is: where do we go next? And this is more a list of things rather than precise steps.
A
One of the main things we would like to do, and would like to see, is more long-term rating of the peers we're connected to, and also tracking of these peers. I think at the moment, when you're using Kubo — when you're using go-ipfs — all this peering logic is kind of abstracted away, and there's no easy, well-streamlined way to say: okay, I want to keep a snapshot of my peers over time, to understand which ones are actually reliable, and to have more control in terms of latency — or, as Gil mentioned this morning, the actual response times or the bandwidth already obtained from them. There's been a lot of tooling like that built for BitTorrent.
A
Similarly, content providing and content routing is an area for improvement, and I also think this workshop, and the work being done to that extent — to actually measure and understand what's happening for different strategies — is important.
A
Something that's usually less mentioned for IPFS, but is more of a problem for gateways, is names and the resolution associated with them, because IPFS content is not only accessed through a hash: at least for a gateway, you tend to access it through a DNS name — actually an IPNS domain, or even through ENS, for instance — and all these resolutions have a cost.
A
They have different standards in terms of TTLs and in terms of the way the content is organized, and so being able to track and standardize this would be quite interesting as well.
A
Another area to actually measure is content layout. On IPFS you have these CIDs, each of which may have a different layout.
A
Is there some overlap? Could we better inform the production of content, so that you actually leverage the fact that you can share blocks between multiple pieces of content and multiple CIDs? The chunking that's included by default in Kubo is rather primitive, so having more view and vision into that would be quite nice. I've seen there's been some work with the new clients being rolled out to actually extend these bits of monitoring.
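The block-sharing question can be made concrete with fixed-size chunking, which is Kubo's default strategy (the 4-byte chunks below are a toy stand-in for the real 256 KiB default):

```python
import hashlib

CHUNK_SIZE = 4  # toy value; Kubo's default fixed-size chunker uses 256 KiB

def block_ids(data: bytes, size: int = CHUNK_SIZE) -> set:
    # Hash of each fixed-size chunk, standing in for a block CID.
    return {hashlib.sha256(data[i:i + size]).digest()
            for i in range(0, len(data), size)}

v1 = b"ABCDEFGHIJKLMNOP"
v2 = b"ABCDEFGHXXXXMNOP"  # third chunk modified in place
v3 = b"X" + v1            # one byte inserted at the front

# Three of the four blocks are shared between v1 and v2, so a cache or
# provider only needs one new block for the second version.
assert len(block_ids(v1) & block_ids(v2)) == 3

# But a single inserted byte shifts every later chunk boundary and
# destroys all sharing — the "rather primitive" part; content-defined
# chunking would keep most blocks aligned.
assert len(block_ids(v1) & block_ids(v3)) == 0
```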
A
On another note, distributed tracing, to actually improve visibility at the level of the gateway, would definitely be an improvement. It's starting to be integrated into Kubo, actually, to some extent. Basically, the idea is that when you do a request, you actually understand where it goes and where it spends time. At the moment, in Kubo, it's kind of all-or-nothing from what I've seen.
A
You can start it, but it's really focused on debugging: if you're coding — developing Kubo — and you want to debug, you can actually start a Jaeger instance and get the distributed traces. But ideally you would also like to operate that in a production setting, so that on your production gateway you can pass some kind of header or some kind of setting that lets you trace a request through your whole system.
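The propagation idea — carrying one trace identifier across hops so every service contributes spans to the same trace — can be sketched with the W3C Trace Context `traceparent` header (a simplified illustration; a real deployment would use an OpenTelemetry SDK rather than hand-rolling this):

```python
import secrets

def make_traceparent() -> str:
    # W3C Trace Context header format: version-traceid-spanid-flags.
    trace_id = secrets.token_hex(16)  # 16 bytes -> 32 hex chars
    span_id = secrets.token_hex(8)    # 8 bytes  -> 16 hex chars
    return f"00-{trace_id}-{span_id}-01"

def child_traceparent(parent: str) -> str:
    # Each hop (edge -> cache -> IPFS node) keeps the trace id and
    # mints a fresh span id before forwarding the header downstream.
    version, trace_id, _span_id, flags = parent.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"

root = make_traceparent()
hop = child_traceparent(root)
assert root.split("-")[1] == hop.split("-")[1]  # same trace across hops
assert root.split("-")[2] != hop.split("-")[2]  # distinct span per hop
```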
A
That's something we have at Cloudflare for most of our infrastructure: we know how much time we spend in the cache, we know how much time it takes to do a DNS resolution. Pairing that with the visibility we could have in Kubo and go-ipfs would be quite interesting.
A
It's been, at least for operators, one of the main challenges — one of the main areas where time has been spent — so coordinating these efforts and thinking more about it is, I think, an important step, the same as for measurements: the more people you have involved in the measurements, the better the measurement is, or at least the more data you have that you can filter and use to understand whether things are better. The same goes for these challenges. And yeah, that's it.
B

A
So basically, the IPFS node is based on Kubo — well, Kubo as a library, even though Kubo is meant to be a node, so there's probably some naming issue there. But yeah, basically we use go-ipfs as a library so that we can have a custom caching strategy. We also run it in an internal Kubernetes cluster at Cloudflare whose IPs we are not comfortable exposing, so we have a forward proxy for that.
A
We have a custom datastore implementation so that all our nodes can actually share the same datastore on the backend. That way, you also reduce the number of requests you're making to the outer IPFS network.
A
This use of Kubo as a library is also very helpful for introducing new features, for instance for research. For example, DNSSEC verification and validation is something that just doesn't exist in Kubo, so that's something we add on top — the same for adding the caching headers, which I think have now been upstreamed — and various things like that. Is there something in particular you were interested in? Okay, yeah.
B

A
So, not at the moment, but that's definitely something we're looking to do. At least, first of all, evaluating what this global set of peers we should peer with is — I think that's already an important problem, because you don't know which peer is actually good, and you don't know how you should update this set. For instance, Ethereum has been doing this.
A
It has something like this with its discovery protocol: you can just let the daemon run and it will export a set of peers for you, with a certain rating. A rating thing like that doesn't exist yet for IPFS, but yes, that should be the case.
B

A
That's something we're thinking about at the moment rather than something we're doing. But really, we need to consolidate the content routing that we have in order to actually, properly, provide this content reliably to the network without overloading our infrastructure.
B

A
So gateways have kind of two interfaces they can be reached on. There's the Bitswap interface, which we just block, so that's not an issue here. The browser side is definitely a challenge, and it's especially been one of our main challenges initially, because the gateway was out as a research product on cloudflare-ipfs.com and we received our fair share of interesting requests, so we had to devise strategies to prevent that.
A
It's been kind of an evolving thing. One of the first things we tried was rate limiting, to be a bit fairer in terms of how the content is distributed. But you also see that not all requests are created equal: for instance, if you want to run a streaming service on IPFS through a gateway, well, that may not be something we want a public gateway to serve, and so it's really up to the public gateway operator to decide how that goes.
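A rate limiter of the kind described is classically a token bucket per client; a minimal sketch (the parameters are illustrative, not Cloudflare's actual policy):

```python
class TokenBucket:
    """Allow `rate` requests per second with bursts of up to `burst`.
    Time is passed in explicitly (e.g. time.monotonic()) for testability."""

    def __init__(self, rate: float, burst: float, now: float = 0.0):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = now

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=1.0, burst=2.0)
assert bucket.allow(0.0)       # first burst token
assert bucket.allow(0.0)       # second burst token
assert not bucket.allow(0.0)   # bucket empty: request rejected
assert bucket.allow(1.0)       # one second later, one token refilled
```

In practice one bucket per client IP (or per customer) gives the fairness property mentioned, and request cost can be weighted by response size so that streaming-style workloads drain tokens faster.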
A
That's also why, now that IPFS is more of a product at Cloudflare, we're better able to provide the tools to customers for them to define what, to them, is good content or not — but they are the ones actually paying for that. So yeah, it really depends.
B
You mentioned hiding your IP addresses.
A
Yep. So basically we're leveraging Cloudflare's TCP offering — I think it got renamed; historically it's been named Spectrum — so we leverage Spectrum to handle mitigating this TCP activity.
B

A
Okay, yeah, so 200 is really on the left there. Yeah, but that's just the y-axis being cut off — okay.
C

A
I think it's not that specifically — it's more that the default metrics exposed by go-ipfs for Bitswap are rather limited and do not allow for easy debugging. You can see the number of active requests, you can see how many you've sent, etc., but, as you mentioned, you don't have buckets for things like the size of the file. It would be good to just extend the variety of metrics that are available.
C

A
Yes — the efficiency of content layout is not just specific to the gateway. More generally, you have all this data on IPFS, which is divided into blocks that can overlap, and the question is: there's likely a lot of data, but is it actually efficiently partitioned, so that you can actually share blocks between multiple CIDs? And that, I don't know — it's not that important, I mean, it's a bit important for the gateways.
A
You can minimize the cache, but it's even more important from a provider perspective, where, when you're constructing the DAG on top of your data, one strategy could be to inform the way you build the DAG based on the overall content you have, so that you actually benefit from this content-layout structure. Yeah — and last?
C

A
Most likely not on all of these, but for some of them, yes. The name resolution is something we're actively working on measuring; long-term peer rating is something we're also considering; distributed tracing and integration with enterprise monitoring is more of an ongoing effort, because that's what we need for our operations, but we don't know yet how much it benefits everyone else. All good — thank you.