A: My talk will be about decentralized NAT hole punching, and especially about measuring the DCUtR success rate. DCUtR stands for Direct Connection Upgrade through Relays, and I will get to this in a second. In this talk I will describe how the whole hole punching process works in the scope of libp2p, and then lay out what our measurement setup looks like and how we want to measure the success rate of these hole punches.
A: First of all, a big shout-out to Max, who has already done two excellent talks on hole punching. The first one was at FOSDEM; I highly recommend that one if you want to dive really deep into the technical details and the protocols. Another one was in Paris, where I was lucky enough to be part of the talk. I highly recommend both of them if you want more information than I'm about to present.
A: So, what's the motivation behind hole punching? We want full connectivity among all nodes of the p2p network, despite NATs and firewalls. The requirements for us are: we don't want any centralized infrastructure, it should work with QUIC and TCP (and maybe also WebRTC), and it should integrate nicely into the libp2p stack. So, in general, what are NATs and firewalls? Just a brief introduction.
A: NATs map your local IP address to a public IP address, and firewalls in general guard the incoming and outgoing network traffic between your local network and the internet. Firewalls usually do this with something called a state table, where they enter a new row that takes the source IP address and source port and maps them to the destination IP and destination port. There is usually also a state column, which is not depicted here, and the table also tracks the transport that's used. One of the simplest rules would be: only let packets in if a packet previously left the network. So what's the problem with that in the area of peer-to-peer networks?
A: On the right-hand side, peer A sends, in the case of a TCP connection, a SYN packet to connect to B. This packet reaches router A. Router A updates its own state table, as you can see at the bottom, with the internal IP address of peer A on the left-hand side and the internal port, and it keeps track of the destination IP and destination port. The packet leaves router A and gets routed through the internet.
A: Hole punching works as follows. Let's imagine that, through some magical mechanism, both peers can synchronize a simultaneous connect. This means, again in the case of a TCP connection, that both issue a SYN packet at the same time, and both routers update their own state tables. The packets get routed through the internet, they cross paths, they reach the other router, and both routers look in their own state tables and see: well, I just sent out a packet, I'm expecting another packet, so I let this one through. Everything's good and we are connected.
A: So this is hole punching in general, and, as I said, we need some magical mechanism, a process by which both peers can synchronize. So how do they synchronize these simultaneous connects?
A: For this I will now explain the DCUtR protocol. This is what libp2p uses to facilitate the hole punch mechanism, and for it we need a relay that is publicly reachable. It works as follows: peer B, on the right-hand side, first of all detects that it's behind a NAT and is not publicly reachable. It connects to a relay and reserves a spot on this relay.
A: The relay itself is just another go-ipfs, or Kubo, node. Since Kubo 0.11, all Kubo nodes are so-called limited relays. A limited relay only allows a certain amount of resources to be shared; in this case I think it's around four megabytes of bandwidth, and only a couple of protocols are allowed, but this is more than enough to facilitate the hole punch here.
A
So
pure
beef
searches
for
like
any
relay
any
real
limited
relay,
gets
a
reservation
at
this
relay.
That's
how
it's
called
and
also
then
has
some
a
multi
address
that
looks
like
this.
The
smart
address
is
passed
to
peer
a
through
some
other
means
or
pa
gets
gets
to
know
this.
This
other
multi
address
npra
also
connects
to
this
relay
and
asks
this
relay
hey.
Can
you
facili?
Can
you
forward
traffic
for
me
to
this
pb
and
with
in
this
case
right
now?
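For illustration, a relayed multiaddress has roughly this shape (the peer IDs below are made up). A small helper can split it into the relay's own address and the target peer, assuming the standard `/p2p-circuit` separator:

```go
package main

import (
	"fmt"
	"strings"
)

// splitCircuitAddr splits a circuit-relay multiaddress of the form
// <relay addr>/p2p-circuit/p2p/<target peer id> into its two halves.
func splitCircuitAddr(addr string) (relay, target string, ok bool) {
	parts := strings.SplitN(addr, "/p2p-circuit", 2)
	if len(parts) != 2 {
		return "", "", false
	}
	return parts[0], strings.TrimPrefix(parts[1], "/"), true
}

func main() {
	// Hypothetical example: relay at 203.0.113.7, target is peer B.
	addr := "/ip4/203.0.113.7/tcp/4001/p2p/QmRelay/p2p-circuit/p2p/QmPeerB"
	relay, target, ok := splitCircuitAddr(addr)
	fmt.Println(ok)     // true
	fmt.Println(relay)  // /ip4/203.0.113.7/tcp/4001/p2p/QmRelay
	fmt.Println(target) // p2p/QmPeerB
}
```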
A
Both
peers
are
connected
through
this
relay
and
as
soon
as
p
notices
that
pier
a
has
connected
through
this
relay
pb
looks
at
the
multi-addresses
of
pra
and
tries
it
something
that's
called
a
connection
reversal
so
before
we
try
to
try
this
whole
punch,
we
actually
want
to
avoid
this
this
process
and
just
try
to
directly
connect,
and
only
if
this
fails
plb
actually
starts
with
this
dc
utr
protocol.
A
So
this
dc
utr
protocol
is
this:
is
a
stream
that's
opened
and
prb
issues
or
like
sends
out
the
first
message
of
this
protocol,
which
is
called
a
connect
message,
and
this
connect
message
contains
all
the
publicly
reachable
ip
addresses
of
pb
for
the
hole
punch
after
that
will
happen
afterwards.
A
So
pb
sends
out
this
connect
message
and
starts
a
timer
to
measure
the
round
trip
time
which
will
become
important
in
a
second.
This
connect
message
gets
routed
through
the
relay
ends
up
at
pier
a
pa,
receives
this
connect
message
and
sends
its
another
connect
message
back
this.
This
connect
message
in
turn
contains
the
multi
addresses
of
peer,
a
like
the
the
publicly
reachable
ones
like
public,
ips
and
por.
A
Any
of
these
events
have
occurred.
Both
start
this
whole
punch,
as
previously
they
try
to
connect
each
other
simultaneously.
The
routers
update
their
state
tables.
They
pass
paths
somewhere
in
the
internet,
not
depicted
here
right
now.
The
packets
pass
through
both
routers.
They
look
at
the
state
tables,
everything's,
fine
and
both
are
connected.
So
this
is
like
in
a
nutshell,
how
this
yeah,
how
this
whole
point
or
like
this
direct
connection,
upgrade
through
relay
protocol
works.
A
As
I
said,
max
has
a
more
thorough
talk
about
all
of
the
the
details
and
the
protocol
itself
and
also
the
limited
relay.
So
I
highly
recommend
to
look
at
this
at
one
of
his
talks
all
right,
so
this
is
now
deployed.
A
So
how
do
we
measure
it?
And
if
I
said
it's
deployed,
we
can
have
a
look
at
the
agent
version
uptake
janus
also
already
showed
one
of
these
graphs
earlier
today.
This
is
one
of
the
more
recent
time
frames
here,
so
we
have
the
on
the
x-axis
we
have
the
date
and
on
the
y-axis,
the
number
of
dht
server
peers
and
each
line
corresponds
to
one
of
these
most
recent
agent
versions
from
0
14
to
0
10.
A: As you can see, after Kubo version 0.13 was released (it still says go-ipfs in the graph), there's a steady increase of DHT server peers, roughly corresponding to a rate of one new DHT server peer per hour. This data also comes from network crawls, and our first idea was that we could just use the peers that we detect through these crawls and hole punch them.
A
Every
time
we
detect
a
new
one,
we
can
just
like
try
to
hole,
punch
them
and
try
to
create
a
direct
connection.
However,
these
are,
as
I
said,
dh
server
piece
and
they
are
already
publicly
reachable,
and
so
we
need
a
way
to
find
peers
that
are
behind
nets,
which
is
actually
quite
hard
to
do,
because
they
we
cannot
reach
them
and
we
can
actually
not
see
them
from
the
internet,
and
for
this
we
have
another
setup
which
I
will
get
to
in
a
second.
We
have
this
repository
here.
A: Everything that I'm about to present is there, including the analysis code. The components of our measurement setup are a honeypot, a server and a set of clients, and I think the most interesting one is the honeypot. As I said, we cannot see peers behind NATs, so we need a way to attract peers behind NATs to connect to us, so that we can detect them and try to facilitate a hole punch with them.
A
So
these
honeypots
are
just
usual
dht
server,
peers
that
we
are
currently
running
on
on
one
of
our
servers.
They,
what
the
only
thing
that
they
do.
Is
they
announce
it's
themselves?
Well,
we
only
have
one,
so
this
honeypot
only
announces
itself
to
the
network,
and
it
does
that
by
walking
the
dht.
This
means
the
same
methodology
that
the
crawler
is
using.
A
So
it's
just
enumerating
the
dht
quite
slowly
and
it's
a
very
stable,
pier
and
the
hope
is
that
we
pass
by
every
dht
server
appear
in
the
network
and
since
we
are
stable,
we
get
like
the
honeypot
gets
inserted
into
the
routing
tables
of
these
other
remote
peers,
so
that
when
peers
behind
nets
want
to
interact
with
the
dht,
they
get
routed
to
our
honeypot.
And
this
way
we
can
over
time
get
to
know
a
lot
of
clients
that
are
behind
nets.
So
that's
the
idea
and
the
sunny
part
if
it
detects
a
inbound
connection.
A: The server keeps a database of these peers that we have detected to be behind NATs, and after we have performed a hole punch we can also report the results back to it. For the clients, on the other hand, we have one in Rust and one in Go, and another shout-out to Max and also Eleanor, who has been working on the Rust implementation of these clients.
A
It
tries
to
connect
to
to
these
peers
that
are
behind
nets,
and
if
this
was
successful
or
not
it
just
reports
back
the
the
outcome
of
this
whole
punch.
The
architecture
roughly
looks
like
this.
As
I
said,
the
honeypot
walks
the
dht
and
by
walking
the
adhd
we're
increasing
the
chance
that
pierce
dcgtr
capable
piers
behind
nets
connect
to
the
honeypot.
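The honeypot's selection step can be sketched like this. This is a toy model under assumptions of mine (the `peerInfo` shape and field names are hypothetical; the real logic lives in the project's repository): a peer is worth handing to the clients only if it speaks the DCUtR protocol and advertises at least one circuit-relay address to reach it through.

```go
package main

import (
	"fmt"
	"strings"
)

// peerInfo is a simplified, hypothetical view of what the honeypot
// learns from an inbound connection.
type peerInfo struct {
	ID        string
	Protocols []string
	Addrs     []string
}

// isHolePunchCandidate reports whether a peer both supports DCUtR
// and advertises a circuit-relay address to dial it through.
func isHolePunchCandidate(p peerInfo) bool {
	supportsDCUtR := false
	for _, proto := range p.Protocols {
		if proto == "/libp2p/dcutr" {
			supportsDCUtR = true
			break
		}
	}
	if !supportsDCUtR {
		return false
	}
	for _, a := range p.Addrs {
		if strings.Contains(a, "/p2p-circuit") {
			return true
		}
	}
	return false
}

func main() {
	p := peerInfo{
		ID:        "QmPeerB",
		Protocols: []string{"/ipfs/id/1.0.0", "/libp2p/dcutr"},
		Addrs:     []string{"/ip4/203.0.113.7/tcp/4001/p2p/QmRelay/p2p-circuit"},
	}
	fmt.Println(isHolePunchCandidate(p)) // true
}
```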
A
As
soon
as
we
get
the
inbound
connection,
it
saves
the
connection
to
the
database
and
the
server
just
exposes
the
data
from
the
database
to
the
clients
and
the
clients
actually
do
the
whole
punches
and
reports
back.
The
outcomes
did
I
forget
any
something
I
don't
think
so:
yeah,
okay
and
yeah
we're
monitoring
all
of
that.
Well,
we're
not
monitoring
the
clients
specifically,
but
the
honeypot
and
the
server
component
and
yeah
you
can
find
it's
publicly
accessible.
You
can
find
it
under
this
url
on
the
right
hand,
side.
A
This
is
just
just
a
the
most
up-to-date
view
on
how
everything
performs,
and
this
brings
me
then
to
some
of
the
first
results
but,
as
I
said,
keep
in
mind,
I'm
just
running
one
of
these
clients
in
my
home
network,
and
it's
probably
so
there
there's
much
much
more
work
to
do
to
get
a
more
comprehensive
view
of
the
network,
but
still
so.
A
This
shows
the
incoming
connections
per
hour
to
the
honeypot
right
now
and
if
we
are
really
brave,
we
can
also
put
a
linear
fit
through
this,
and
we
can
also
see
that
the
number
of
incoming
connections
increases
over
time,
which
is
expected,
as
0.13
is
rolled
out
into
the
network
and
as
you
can
see,
we
have
roughly
300
350
new
clients
per
hour
that
connect
to
to
the
honeypot,
which
we
can
then
use
to
yeah
to
try
to
try
to
hold
punch.
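The linear fit mentioned here is just an ordinary least-squares line; a sketch over made-up hourly counts (the talk's actual numbers come from the honeypot's database):

```go
package main

import "fmt"

// linearFit returns slope and intercept of the least-squares line
// through the points (x[i], y[i]).
func linearFit(x, y []float64) (slope, intercept float64) {
	n := float64(len(x))
	var sumX, sumY, sumXY, sumXX float64
	for i := range x {
		sumX += x[i]
		sumY += y[i]
		sumXY += x[i] * y[i]
		sumXX += x[i] * x[i]
	}
	slope = (n*sumXY - sumX*sumY) / (n*sumXX - sumX*sumX)
	intercept = (sumY - slope*sumX) / n
	return slope, intercept
}

func main() {
	// Hypothetical hourly connection counts over five hours.
	hours := []float64{0, 1, 2, 3, 4}
	conns := []float64{300, 310, 325, 330, 345}
	slope, intercept := linearFit(hours, conns)
	fmt.Printf("%.1f connections/hour growth, starting near %.1f\n", slope, intercept)
}
```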
A
The
conditions
are
that
this
the
clients
are
support
the
dc
otr
protocol
and
that
they
actually
listen
on
a
relay.
So
these
are
the
two
conditions
for
for
that,
and
now
I
think
probably
the
most
interesting
graph
is.
How
successful
is
that?
As
I
said,
it's
just
running
on
my
own
personal
home
network
and
I'm
this
client
on
my
home
network
has
performed
roughly
thirteen
thousand
three
hundred
hole
punches
to
two
and
a
half
thousand
unique
peers.
So
every
pier
was
roughly
hole,
punched
five
times,
probably
and
yeah.
A
The
success
rate
is
around
72
percent.
Well,
28
have
failed,
which
is
not
as
good
as
max
says,
for
instance,
reported
in
the
foster
presentation
that
he
reported
a
success
rate
of
90
percent,
but
I
think
I'm
not
I'm
not
sure
about
the
measurement
setup
that
he
was
reporting
on
the,
but
I
think
that
it
was
a
little
bit
more
controlled
than
this
one.
A
This
is
actually
the
success
rate
to
like
random
peers
in
the
whole
ipfs
network,
and
I
think,
if
you
take
this
into
account,
this
is
already
a
pretty
good
success
rate
in
my
opinion,
but
yeah
we
can
talk
about
this
afterwards
and
I
think
also
another
interesting
result
already.
A: When we try to hole punch another peer and the first direct connection fails, we actually try a second time and a third time, and only then do we stop the hole punch attempt. This graph shows that we could actually stop after the first attempt: in 97 percent of the cases where the hole punch was successful, it succeeded on the first attempt, and only 2.3 percent and 0.4 percent succeeded on the second or third try. So you could argue that if it doesn't work the first time, it won't work the second or third time either.
A
So
how
long
does
the
such
a
hole
punch
take
yeah
the
the
median
is
like
roughly
0.9
seconds
in
the
90th
percentile
1.7
seconds
or
99
percent
around
8
seconds.
I
just
put
it
here
because
I've
got
the
data,
probably
yeah,
but
yeah
this
this
accounts
to
the
to
the
connection
establishment.
A
So
this
could
become
a
yeah
interesting
if,
if
performance
becomes
a
problem
at
some
point-
and
also
this
is
a
quick
graph
that
I
created
yesterday,
which
takes
into
account
the
round
trip
time
measurement
and
how
successful
it
is,
depending
on
the
round
trip
time
measurement.
So
I
could
imagine
that
if
I
wanted
to
hole
punch
appear,
so
I'm
not
right
now
on
ice
and
if
I
wanted
to
hole
punch
another
pier
also
in
iceland,
but
the
relay
pier
is,
for
instance,
somewhere
in
australia.
A
So
I
just
looked
at
the
router
time
measurement
and
just
looked
at,
in
which
cases
this
whole
punch
was
successful
or
failed,
and
we
can
see
that
I
don't
think
it's
statistically
significant,
but
we
like
there's
a
tendency
of
if
the
router
time
is
a
little
higher,
then
the
chances
that
the
whole
punch
fails
is
also
a
little
higher
but
not
as
pronounced
as
I
would
expected
it
to
be,
and
with
that,
the
next
step
would
be
just
to
dig
deeper
into
the
data
we
want
to
have.
A
We
want
to
know
the
dependence
on
tcp
and
quick
connection,
upgrades
quick
hole
punches,
and
we
also
want
to
compare
the
go
and
rust
implementation
a
little
more
and
also
when
we
deploy
more
clients
to
the
network
or
well
to
to
home
networks.
We
also
want
to.
We
want
to
let
those
clients
to
hold
punches
each
other
so
that
we
have
the
control
about
both
parts
of
the
of
the
hole
punch
appears
and
we
as
yeah.
This
implies
that
we
want
to
deploy
the
clients
to
multiple
or
more
vantage
point
as
right.
A: You will receive an API key from me, and you can download the client binaries from the GitHub repository that I showed earlier. Then you can run this, for instance, on your Raspberry Pi at home or somewhere else, and just participate in this measurement campaign. It would be great if we got more data.
A
This
form
will
ask
you
for
some
stuff,
like
what
router
are
you
using
and
what
the
network
topology
looks
like
if
you
have
something
to
tell
and
yeah,
because
this
whole
punch
success
rate
depends
a
lot
on
the
type
of
net
that's
using
and
that
the
router
is
using
so
having
information
about
that
will
help
us
later
on.
In
the
data
analysis,.
A
I
think
you
can
also
run
it
just
for
two
hours,
I'm
running
it
24
7
for
the
last
I
don't
know
nine
days
or
so,
but
yeah
you
or
even
if
you
just
can't
participate
a
couple
of
hours
worth
of
data.
I
think
it's
it's
worth
it
so
yeah
so
and
that's
it
I
think
yeah.
Thank
you
very
much.
B: When you say that if a node is failing, trying again gives a very low success rate: do we try again using the same relay, or do we try again using a different one?
A: Right, yeah, totally. This is something that I wanted to show with this graph. That would be true if the assumption holds that with a longer round-trip time the success rate is lower, but this graph, as I said, only implies that; it's not very clear that it's actually the case. But yeah, definitely something we could try.
C: Is there a way to, I don't know, make that measurement more accurate, or even to dive deeper into these failure cases? Maybe there has been a discrepancy between the RTT measurements of the two peers and when they started sending their SYN packets, and that's why it eventually failed, right? And obviously, the longer the delay, the further apart the numbers are.
A: I was also about to say that maybe it's also the opposite, I don't know. The thing with these attempts is that each attempt has its own round-trip time measurement, so we've already got three round-trip time measurements. With the first attempt we do this Connect, Connect, Sync exchange, get one round-trip time measurement, and then we try a hole punch.
A: If this doesn't work, we do the whole thing again: Connect, Connect, round-trip time measurement. And it seems like if it doesn't work the first time, it won't work the other two times either. And, eh, for measuring the RTT between...
A: But then, okay, so then I know the ping to the router, but then I need to inform the other peer when it should start the connect process, and this again needs to go through the relay.
A: Yeah, it has to go through the relay. But even if I knew the ping to the router, I don't think this would help me much, because of the synchronization: I still need to decide how long to wait for the Sync packet to reach the other peer.
B: So you know the RTT from router A to router B through the relay? You know the RTT from A to the relay and from the relay to B, right, and from A to B. So you can do easy arithmetic.
A: But how should I know the delay from router A to the relay in this case? I mean, this is all from the perspective of B. In this case B can measure the latency to the relay and to router A.
D: So with the Kubo 0.13 update, is this only realized when the node behind the firewall or the NAT is on 0.13, or do both peers have to be on 0.13?
A
But
both
both
parties
needs
to
be
need
to
be
in
13,
because
both
need
to
support
the
the
st
it's
slash.
The
p2p,
slash
dc
utr
protocol
and
both
both
peers
need
to
support
this
protocol.
For
this
to
work.
This.
C: [inaudible]

A: And it was about exactly that. This was rolled out with 0.11 already, so it has been around for quite some time. Yeah, thank you.
D
Mean
yeah,
I'm
just
curious
if
you
guys
have
any
like
next
targets,
you're
trying
to
hit
here
or
you
know
what
could
cause
those
to
fail.
A
Yeah
exactly
so,
this
is
the
these
would
be
the
next
steps
just
to
to
find
out
why
these
fail
and
like
there's
also,
I
think
what
you
honest,
ask
some
failure
reasons,
and
I
can
imagine
that
so
some
some
nets
actually
don't
allow
hole
punching
like
if
you
have,
if
you're,
behind
a
symmetric
net,
for
instance,
this
won't
work.
And
yes,
some
other
reasons,
as
we
just
discussed,
could
be
the
rtt
measurement,
which
is
not
not
quite
accurate
and.
C: [inaudible]

A: Running it only from my home and extrapolating to the whole network is also something that could change these results. But I mean, I think, Yiannis, you also said a couple of times that this could be a game changer, being able to connect directly to peers behind NATs in the peer-to-peer world, and we're already at 72 percent from my network.
A: That's not shown here; this measures the time from opening the DCUtR stream to the actual successful hole punch, or rather the actual direct connection.
A: If I take a look at the times from having a connection through the relay until we have actually upgraded it to a direct connection through a hole punch, we can see that this graph basically starts just after five seconds, which relates to something I mentioned briefly earlier.
A: When peer A connects (oh, hang on, here we go), so peer A connects to peer B through this relay, and peer B notices that and immediately starts to connect directly to A, without starting the hole punching process: before we try the hole punch, peer B just tries to connect directly to peer A on its publicly reachable IP addresses, and then eventually times out. I think for the TCP and QUIC transports the timeouts are five seconds.
A
The
timeouts
are
five
seconds,
and
only
after
this
time
out,
ump
starts
this
whole
process
by
opening
this
dc
otr
stream
and
sending
the
connect
messages
and
so
on,
and
this
is
also
something
that
I
saw
in
yeah
in
the
data
which
is
just
not
oh,
sorry
not
depicted
here,
hang
on
and
which
could
be
something
that,
instead
of
waiting
for
this
direct
connection,
to
succeed
or
or
fail.
In
this
case,
we
could
also
just
start
the
whole
punch
directly
and
wait
and
raise
raise.
B: [inaudible]

A: I hope I got this correct, but I think some of the stuff that we want to analyze is the TCP and QUIC success rate, because with QUIC you're sending UDP packets, and NATs and firewalls behave differently in how they update their state tables.
A: So there's certainly, well, not certainly, because we have not measured it, but I can imagine that there's a difference in success rates depending on whether you use TCP or QUIC, and this is something that we definitely want to analyze, and also WebRTC, and maybe WebTransport as well.
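Once the outcomes are recorded, grouping them by transport is straightforward. A sketch over hypothetical records (the `outcome` shape is made up; the real schema lives in the project's Postgres database):

```go
package main

import "fmt"

// outcome is a hypothetical, simplified hole punch record.
type outcome struct {
	Transport string // "tcp" or "quic"
	Success   bool
}

// successRates returns the fraction of successful hole punches
// per transport.
func successRates(outcomes []outcome) map[string]float64 {
	total := map[string]int{}
	ok := map[string]int{}
	for _, o := range outcomes {
		total[o.Transport]++
		if o.Success {
			ok[o.Transport]++
		}
	}
	rates := map[string]float64{}
	for t, n := range total {
		rates[t] = float64(ok[t]) / float64(n)
	}
	return rates
}

func main() {
	data := []outcome{
		{"tcp", true}, {"tcp", false},
		{"quic", true}, {"quic", true}, {"quic", false},
	}
	fmt.Println(successRates(data)["tcp"]) // 0.5
}
```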
A: It's not yet published; I'm happy to do that if you're interested. The database itself is just a Postgres database, the data is highly normalized, and the schema is all on the GitHub repository.
A
The
data
itself
is
just
on
the
server
that
we
are
running,
and
so,
if
you're
interested
I'm
happy
to
dump
the
data-
and
I
think
if
we
want
to
package
this
up
into
like
a
publication,
we
would
also
like,
in
this
into
a
scientific
publication,
would
also
publish
the
data
sets
as
well.
So,
but
it's
not
only
on
github
right
now,.
A: You mean how many of these limited relays are in the network already? This is roughly visible in this graph. As Jorropo said already, since 0.11 we've got this limited relay capability, and I can imagine that if you add up the numbers from 0.11 to 0.14, you get a pretty accurate count of how many relay servers there are in the network already. This goes up to, I don't know, a few thousand, maybe two or three thousand, roughly.