A: My name is Alex from team Magmo, and I'm here to talk about moving at the speed of Nitro: a use case for Testground. That's my attempt at a clever title for a talk about the Testground testing tool and some of the benefits we've gotten from it.
I'll start off with a brief outline. I'm going to give a brief overview of our go-nitro framework; it'll be similar to what George covered, with some slightly different diagrams and maybe a few added mistakes. I'll talk about the Testground testing tool and framework and how we use it, then about some of the metrics we capture from our Testground tests, and we'll do a little bit of a performance investigation. We'll chat about how we can mock different network conditions and how that affects our metrics, and about how we're trying to contribute back to the Testground project and what's next for Testground and go-nitro. Then I'll leave some time for questions at the end.
Okay, so how does go-nitro work? Basically, retrieval clients and providers open a directly funded channel with the hub; I think you saw that in George's presentation. You can see here they each have a directly funded channel with the hub. They do this by calling a smart contract's deposit method, which locks up their funds on chain.
Eventually, the client and provider are going to close that virtual channel, and the ledger channel gets updated with the balances. Then, when the provider wants to withdraw their funds, they call the withdraw method on the contract.
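That deposit / off-chain update / withdraw lifecycle can be sketched as a toy in-memory state machine. This is only an illustration of the flow described above, not the real Nitro adjudicator contract; every type and function name here is invented for the example.

```go
package main

import (
	"errors"
	"fmt"
)

// ChainSim is a toy stand-in for the on-chain contract: deposit locks
// funds, the final ledger balances are recorded when the channel closes,
// and withdraw releases a participant's share.
type ChainSim struct {
	locked    map[string]int // funds locked by Deposit
	finalized map[string]int // balances after the ledger channel closes
}

func NewChainSim() *ChainSim {
	return &ChainSim{locked: map[string]int{}, finalized: map[string]int{}}
}

// Deposit locks a participant's funds on chain.
func (c *ChainSim) Deposit(who string, amount int) {
	c.locked[who] += amount
}

// CloseLedger records the final off-chain balances, after the virtual
// channel has been closed and folded back into the ledger channel.
func (c *ChainSim) CloseLedger(balances map[string]int) {
	for who, amt := range balances {
		c.finalized[who] = amt
	}
}

// Withdraw releases a participant's finalized balance.
func (c *ChainSim) Withdraw(who string) (int, error) {
	amt, ok := c.finalized[who]
	if !ok {
		return 0, errors.New("no finalized balance")
	}
	delete(c.finalized, who)
	return amt, nil
}

func main() {
	chain := NewChainSim()
	chain.Deposit("client", 100)
	// Off chain: the client pays the provider 30 through a virtual channel.
	chain.CloseLedger(map[string]int{"client": 70, "provider": 30})
	got, _ := chain.Withdraw("provider")
	fmt.Println("provider withdrew:", got)
}
```

The point of the sketch is that only Deposit and Withdraw touch the chain; everything between them happens off chain.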
Measuring the speed of state channels: basically, we were looking for a tool to measure the performance of our state channel framework.
When we first started looking, we found a lot of performance tools designed around the client-server model, but not that many for the peer-to-peer, distributed world. However, lucky for us, Protocol Labs has struggled with this problem and built a really awesome tool called Testground, which is a testing framework designed for testing and benchmarking distributed or peer-to-peer systems. It was originally designed to test and measure the IPFS and libp2p codebases.
So what is Testground? Basically, Testground is a platform for testing, benchmarking, and simulating complicated distributed or peer-to-peer systems. It lets you write test code that gets spun up in a configurable number of instances. It handles all the hard work of spinning up those instances, provides an API that lets those instances coordinate and figure out what role they're going to play in a test, and provides a lot of infrastructure to record performance metrics and report on them, which is super handy.
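The coordination idea is worth a small sketch: Testground's sync service hands each instance a unique sequence number, and tests commonly derive their role from it. Below, an atomic counter stands in for the sync service and goroutines stand in for instances; this is a stdlib-only illustration with invented names, not the Testground SDK itself.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// roleFor maps a 1-based sequence number to a role: for example, the
// first instance to claim a number becomes the hub and the rest clients.
func roleFor(seq int64) string {
	if seq == 1 {
		return "hub"
	}
	return "client"
}

func main() {
	var seq int64
	var wg sync.WaitGroup
	roles := make([]string, 4)
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) { // each goroutine plays one test instance
			defer wg.Done()
			mySeq := atomic.AddInt64(&seq, 1) // claim a unique sequence number
			roles[i] = roleFor(mySeq)
		}(i)
	}
	wg.Wait()
	fmt.Println(roles)
}
```

Exactly one instance ends up as the hub regardless of scheduling order, which is the property the real coordination API gives you.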
So how does Testground work? It runs as a daemon listening for test requests; you can either have it running locally or, like we do, on a cloud VM. When it receives a request, it builds your test code and runs it in a configurable number of Docker instances. I have a little star there because you can also not use Docker, but that's just the way we do it.
So how do we use it? Well, we've written a test case for Testground that uses our go-nitro client to make payments; it's just written in Go. Messaging between instances is handled with a libp2p messaging service using the TCP transport. Like I said, we run the Testground daemon on a cloud VM, so the whole team can share results on that Testground VM. We also have a Docker instance running Hardhat that serves as our blockchain, and we use the metrics API.
So what happens in our tests? Basically, our go-nitro test has a configurable number of clients, hubs, and providers; you can specify what you want the network to look like. The first thing that happens is everyone opens a funded ledger channel with the hub. It's not shown in this diagram, but that is what lets you spin up virtual channels later.
Once that's done, clients start selecting random providers via a random hub. They open a payment channel, as you can see here, send a few payments on that payment channel, and then close it, and we repeat this for the duration of the test.
Another really cool thing is that we've integrated this into our CI. Whenever we create a pull request, we actually trigger a Testground run; as that run completes, our CI adds a comment to the pull request, and on that comment we have a lot of neat things, like links to some performance dashboards.
We have a link to logs and links to some build artifacts there, and this is really great because it lets us make a change and measure the performance impact of that change almost immediately; I think our scenario takes about two minutes to run, so it's a really quick and nice feedback loop. So, measuring performance: how do we do that with Testground? Well, Testground provides a metrics API, based on go-metrics.
Any metrics you record get written to a database that Testground is responsible for spinning up. It also spins up a Grafana Docker instance, which lets you easily create dashboards against that database. And we use dependency injection into our go-nitro client, so the go-nitro client itself can record metrics.
What do we want to measure? Our main benchmark so far has been time to first payment: basically, the time it takes to open a virtual channel and send the first payment on that channel.
The reason we want to look at this: one, it places the largest load on the hub, our intermediary in the system; once the virtual channels are open and payments are being sent through, the hub isn't even involved. And a quick time to first payment is really important for user experience in a minimal-trust world, because you don't want to start providing service until you've received a payment. So before I jump into metrics, I'm just going to give a quick overview of the scenario that I'm using under the covers.
Basically, we have eight participants, so it'll be eight Docker instances. Two of them will be clients, or payers; one will serve as the hub, or intermediary; and then we'll have five payees, or retrieval providers. The duration is just 60 seconds, and each payer will try to maintain three open virtual channels for the duration of the test, so they'll be constantly opening channels, sending payments, closing them, and opening them again, so that we're always maintaining three open virtual channels. I'll get into this a bit more later, but we also mimic some network conditions; right now our scenario uses 10 milliseconds of latency and one millisecond of network jitter.
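Pulled together, the scenario parameters look something like the struct below. This is an illustrative Go shape, not Testground's actual composition file format.

```go
package main

import (
	"fmt"
	"time"
)

// Scenario captures the test parameters described above.
type Scenario struct {
	Payers, Hubs, Payees int           // 2 clients + 1 hub + 5 providers = 8 instances
	Duration             time.Duration // how long clients keep churning channels
	OpenChannelsPerPayer int           // virtual channels each payer keeps open
	Latency, Jitter      time.Duration // network shaping applied to every instance
}

func (s Scenario) Instances() int { return s.Payers + s.Hubs + s.Payees }

func main() {
	s := Scenario{
		Payers: 2, Hubs: 1, Payees: 5,
		Duration:             60 * time.Second,
		OpenChannelsPerPayer: 3,
		Latency:              10 * time.Millisecond,
		Jitter:               1 * time.Millisecond,
	}
	fmt.Println("instances:", s.Instances())
}
```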
So what do the results look like? This is our time-to-first-payment dashboard that we've created in Grafana, and this is just a run we've done. On the very top here we just have some nice details: the unique Testground run ID (for every Testground run you get a nice unique ID), some version information, and then a breakdown of what roles the different instances are playing, and the latency information, which you can see here.
Right now, our time to first payment is 110 milliseconds, which is slightly above the number George gave, but we'll get into that a bit more. Down here we just have it broken down by client. We can also notice there's a bit of a large spike here, so we'll look into that a bit more too.
Another thing we can look at is payment throughput. I guess there are two caveats here. Right now this is just measuring payments dispatched; in the real world, the retrieval providers would need to verify the payments. The other thing is that we've designed our test scenario around testing time to first payment, so we open a virtual channel, send a couple of payments, and then close it.
So now I'm going to go through a little bit of a performance investigation with you. One important part of performance is obviously messaging; it's going to play a huge role in your system, and with the way our code is written now, we actually block on sending a message.
So I was kind of curious: how long does that take, and what's the impact on performance there? The really awesome thing about Testground is that it's really easy to figure this out. On the bottom left there, we have a little helper function in our go-nitro client library; you can just drop that into a function and commit it, and it will start recording function duration metrics as soon as that's committed. Once that's done, we can easily create a nice dashboard.
You can see here, looking at it in a bit more detail: this is the average duration to send a message. It spikes at the beginning, there's a bit of a plateau where there's a pause in the test, and then it spikes up again. These numbers aren't awful, but they're certainly significant.
So what if we didn't block on sending a message? What if we just made a simple change to use a goroutine to send messages? How would that affect performance?
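The change is tiny in Go: instead of calling the send function inline and waiting for the transport, you wrap it in a goroutine. Here's a stdlib-only sketch of the two variants; the real code sends over libp2p, while this one just sleeps to simulate transport time.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// send simulates a transport write that takes a while.
func send(msg string, delivered *[]string, mu *sync.Mutex) {
	time.Sleep(2 * time.Millisecond)
	mu.Lock()
	*delivered = append(*delivered, msg)
	mu.Unlock()
}

// sendAll dispatches n messages, either blocking on each send or firing
// each one in a goroutine and waiting for them all at the end.
func sendAll(n int, async bool) int {
	var delivered []string
	var mu sync.Mutex
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		msg := fmt.Sprintf("msg-%d", i)
		if async {
			wg.Add(1)
			go func() { // the caller no longer blocks per message
				defer wg.Done()
				send(msg, &delivered, &mu)
			}()
		} else {
			send(msg, &delivered, &mu) // blocking: roughly n * transport latency
		}
	}
	wg.Wait()
	return len(delivered)
}

func main() {
	fmt.Println("blocking delivered:", sendAll(5, false))
	fmt.Println("async delivered:", sendAll(5, true))
}
```

The trade-off is that asynchronous sends give up per-message ordering and backpressure, which is exactly the kind of reordering the jitter tests later in the talk are good at flushing out.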
We've also made it so you can specify the number of intermediaries in the multi-hop test, and I think it really demonstrates the power of Testground, because we went from wondering whether this was even possible, to implementing it, to verifying it with a Testground test, within days. And what does the multi-hop result look like? Not as impressive. We haven't done too much work investigating the multi-hop numbers yet; the really important thing for us is that it doesn't blow up awfully.
It works, yeah. So another great thing about Testground is that it lets you manipulate the network, mimicking different network conditions. For our tests we're focusing on two values: latency and jitter. Latency is a flat amount of time added to every network request, and jitter is a random amount of time.
Right now, our default scenario is, I think, 10 milliseconds of latency and one millisecond of jitter; I based that on a ping to a server that was pretty close by. Jitter is actually a very important thing, because it allows messages to be received in different orders. We've discovered a lot of bugs around messages arriving in an unexpected order and causing things to go wrong, so it's really good for battle-testing our framework. So what does it look like if we simulate really slow internet?
This is just a run with a latency of one second and jitter of 100 milliseconds. You can see our software doesn't perform as well under those conditions, but it still functions, and payments are still going through, which is great. We can also look at the case of unreliable internet, where there's just a lot of jitter.
So this is up to one second of jitter. As you can see, it's much better than the one-second-latency case, but still not amazing. It's really reassuring, though, that we can run this test, have all these messages being reordered, and still see everything succeed. And then I guess I also want to throw in the super nice case where we just removed any network delays.
Things are super fast in that case, which is not too surprising, but it's also nice to look at the case where there's not much network overhead and it's mostly our framework, because that will let us figure out what parts of our framework we can optimize. So, contributing back.
We've submitted two pull requests back to the Testground repo. One was support for M1 Macs, which is great because that's what I use, and another allows you to start up an additional Docker container when you run the test. This has been really useful for us because we can start up our little blockchain instance and use that. I can't take credit for any of this, though, because both these PRs were actually written by Hannah Howard; we were just able to push them over the finish line.
So what's next? We really want to start improving the performance of our framework; we've just started getting these numbers, so now it's time to start making them lower. We want to look at testing the unhappy path once we support that in our code.
What we would really love to do is have a Testground test where we just pick a random instance and kill it, and our framework should allow everyone to recover from that. And like George mentioned, we also want to run a Testground test with FVM; we did have that working, like George said, and we will again soon, I'm sure. And that's it for me. Any questions?
B: So I was wondering if you could walk me through the security in the off-chain element, the green box that was in the previous conversation, the virtual channel. What was the design choice for doing that? The things that are coming to my mind are: why move that off chain, when, you know, would it be more trusted to be on chain?
A: So George is probably better placed to talk about this, but I can try to give it a little go. First of all, all our code is open source; it's not proprietary. The big thing about keeping things off chain is that you don't have to deal with the delay and latency of the blockchain. So instead of trying to send a payment on the blockchain in 30 seconds, or whatever the block time is, we're able to get it down to around 100 milliseconds.
So that's why we've moved that code off chain: for performance. Okay.
C: When you showcased the multi-hop protocol, or multi-hub function, does that multi-hop allow you to hop between different currencies and assets, or is it just within hubs, within the virtual channels that you've created?
A: Yeah, so I think the strength of the multi-hop (let me see if I can bring it up here) is that it basically lets you join different hub networks.
So if you're only interacting with this one hub, in the single-hop world you can only open payment channels with other people who are interacting with that hub. In the multi-hop world, even though the retrieval provider is using a different hub, because those hubs have a ledger channel between them, you can still make a payment.