From YouTube: Testing Blockchain Projects and Decentralized Applications with Testground with Anton Evangelatov
Description
Join us for Filecoin Liftoff Week, an action-packed series of talks, workshops, and panels curated by the web3 community to celebrate the Filecoin mainnet launch and chart the network’s future. https://liftoff.filecoin.io/
Events take place all week, October 19-23, 2020. #FilecoinLiftoff
For more information on Filecoin
- visit the project website: https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
Okay, hi everyone. My name is Anton Evangelatov and I'm an engineer at Protocol Labs. I hope you've had a fun time watching the presentations so far; they've been great. If you've missed them, make sure to check them out on YouTube after the dev talks are over. Today I'm going to talk to you about Testground and how we used it to test Lotus, the implementation of Filecoin, prior to mainnet.
So, first of all, what is Testground? Testground is a platform for testing, benchmarking, and simulating distributed and peer-to-peer systems. It's designed to be multi-lingual and to scale gracefully, and in the past we've used it to test various protocols and systems such as IPFS, GossipSub, and others. Testground is a relatively new project.
It supports multiple builders and runners. What does that mean? Basically, you can build test plans with, for example, the Go programming language (JavaScript is in the works), and you can run them locally, as processes on the host system or as Docker containers, or you can run your test plans remotely on a Kubernetes cluster. At the moment there is work underway to support other languages.
Testground has a distributed coordination API, backed by Redis, which provides easy-to-use synchronization primitives such as barriers and locks, and allows for very easy coordination between multiple test plan instances.
Testground also supports complex test manifests, which we call compositions. They allow you to run a test composed of different versions of your code, for example building a network with various versions based on various dependencies.
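For a rough idea of what a composition looks like, here is an illustrative TOML sketch. The plan name, case name, and group ids are made up, and the exact schema may differ from this sketch, so treat it as a shape rather than a reference:

```toml
[metadata]
name = "mixed-versions"

[global]
plan            = "example-plan"   # hypothetical plan name
case            = "smoke"         # hypothetical test case
total_instances = 4
builder         = "docker:go"
runner          = "local:docker"

# Two groups of instances, each built from a different version of the code.
[[groups]]
id = "old-version"
[groups.instances]
count = 2

[[groups]]
id = "new-version"
[groups.instances]
count = 2
```

The point is that one run can mix instance groups built against different dependency versions, which is how you assemble a heterogeneous test network.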
Let's go over some of the core Testground concepts, so that you understand the later part of this talk and how we actually used Testground to test Lotus. Testground has a notion of test plans, as I mentioned. These are the tests that you implement and then run on the Testground platform, and they can have from two up to ten thousand instances.
Currently we support mostly the Go programming language, with support for JavaScript coming soon. Runners define where your test plan will be run: on your local machine, for a fast iteration and feedback loop, or remotely on a Kubernetes cluster, if you have requirements for more resources or instances than your local machine can accommodate.
You are free to implement any other runners that you find necessary, even if we don't support them, so you're not locked in only to, let's say, Docker and Kubernetes. Test plan instances coordinate with each other through the synchronization service that I mentioned earlier. This allows you to write one program, or as we call it a test plan, that is run n times with n instances.
For the engineers among you: if you've participated in the Distributed Code Jam competition from Google, Testground synchronization works similarly. Basically, the same program is run multiple times and, depending on the sequence number that your individual instance gets, or on the group it receives from the run environment, it can execute different code. But ultimately you are writing one and the same program, which makes it very easy to develop. Testground also has an observability pipeline, which allows test plans to emit outputs and metrics for analysis after the test run is complete, and I'm going to show you an example of that later.
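The sequence-number dispatch described above can be sketched in plain Go. This is not SDK code; the role split and the `runInstance` helper are hypothetical, but they show how one and the same program branches into different roles:

```go
package main

import "fmt"

// runInstance sketches how a single test plan program branches on the
// sequence number each instance receives. In the real platform the number
// comes from the sync service; here it is simply a parameter.
func runInstance(seq, total int) string {
	switch {
	case seq == 1:
		return "bootstrapper" // the first instance bootstraps the network
	case seq <= total/2:
		return "miner" // the next instances act as miners
	default:
		return "client" // everyone else runs client code
	}
}

func main() {
	total := 6
	for seq := 1; seq <= total; seq++ {
		fmt.Printf("instance %d -> role %s\n", seq, runInstance(seq, total))
	}
}
```

Every instance runs the same binary; only the sequence number (or group) it receives decides which branch it executes.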
And last but not least, the sidecar is an individual Testground process that's responsible for network management and traffic shaping for test plan instances. It runs in privileged mode on the host machine and listens, through the sync service, for requests from test plan instances for network configuration. Test plan instances can request a specific network configuration from the sidecar, and it can amend the network options for them, be it latency, bandwidth, or anything else.
A
A
A
A
Enter
project
only
project
only
was
incubated
in
june
2020.
The work that we did as part of the project can roughly be split into three different areas. First, test plans that exercise Lotus on Testground. Second, interoperable VM conformance test vectors; these are extremely important and are already adopted by multiple Filecoin implementations, such as Lotus and ChainSafe's Forest implementation. Raúl had a full talk on that earlier today, so make sure to watch it in case you missed it. And third, statediff.
A
The
two
that
compares
the
stage
trees
will
is
the
main
person
behind
state
div
and
has
a
presentation
later
today
so
definitely
check
that
out
as
well
to
learn
how
to
inspect
various
state
differences
in
the
state
tree
produced
by
falcoin.
I'm
going
to
talk
about
the
test
lens.
We
wrote
for
test
ground
that
exercised
lotus,
which
is
the
first
part
from
this
slide.
In Project Oni we wanted to test Lotus programmatically, meaning we import the Lotus libraries in our test plans, so that we have more control over the internal APIs and can configure Filecoin networks in a way that's not really possible if you run vanilla Lotus node or storage miner binaries. While this certainly worked well for us, the downside is that these tests are completely tailored to Lotus and are not interoperable in nature, unlike the VM conformance tests.
A
So
if
you
want
to
reproduce
those
results
with,
let's
say
a
different
falcon
implementation,
you
would
have
to
amend
the
tests.
If
it's
a
go
implementation,
it
will
be
easier.
If
it's
a
different
language
implementation,
you
would
have
to
rewrite
some
of
it.
You can also attach a custom drand network to the Filecoin network that you're starting. The test plans that we implemented cover a number of topics. Slashing conditions is one of them: we recreated scenarios that lead to slashing, for example window proof-of-spacetime misses and sector proving faults. On Filecoin, every miner should prove that they are storing the sectors they have sealed. Those sectors give them power in the network, and they should submit proofs on chain within their respective deadline, periodically, on a daily basis.
So we triggered scenarios that exercise this behavior and verified that the fees for so-called temporary faults and for sector termination are enforced. We ran a number of end-to-end storage and retrieval deals with a variety of configs, including stress testing the network.
We ran payment channel stress tests. Unlike storage deals, retrieval deals on Filecoin are fulfilled off chain, using payment channels to incrementally pay for data received. So payment channels were exercised in their own tests, but they were also exercised in the deals end-to-end tests. And last but not least, we triggered various drand incidents: we stopped the drand network attached to our test Filecoin network; obviously not the production one, because we don't control it, and you wouldn't want to do that with the production network.
During the test runs, we used all the available tools from Lotus, such as this nice Grafana dashboard that comes from the Filecoin team, where we can visualize different parameters of our test networks. For example, you can see on this dashboard that we're using a block delay of two seconds instead of 30 seconds; the reason for that is that we don't want to wait for hours to perform simple tests.
You can also see the power of each miner. We could visualize the exact time when miners were slashed and when they were losing power. Basically, this dashboard is similar to the one at stats.filecoin.io, and it's very handy for developers when running tests on a test Filecoin network. The data visualized here was extracted from one of the miners in the Testground test plan and pushed to InfluxDB, which is again part of the Testground platform.
Additionally, all the Lotus CLI tools were also available to us when testing and running test networks with our test plans. They were incredibly useful in building intuition about the network and about the Filecoin system, while we were also amending configuration parameters in order to run faster tests; we were always using scaled-down block delays, proving windows, deadlines, etc. On this screenshot you can see the table that is generated for a specific miner: which deadline they're in and what sectors they have to prove. That's very handy if you want to understand how, let's say, missed window PoSt proofs are handled.
In the process of writing the testkit library and the test plans, we improved the overall testability of Lotus itself. And, last but not least, everyone on the Oni team learned more about the internals of Lotus compared to what we knew prior to starting the project. As a result, a lot of the knowledge about the system was shared more broadly within the PL organization.
So let's talk a bit more about where we are planning to go from here. Testground was used to validate performance and simulate attack scenarios for go-ipfs and GossipSub. Earlier this year we used test runs to validate go-ipfs 0.5 and the performance of the new DHT improvements, and for GossipSub we ran a number of scenarios to simulate various attacks and make sure that the network doesn't degrade when they happen. We launched the public release of Testground to the community in May.
We want to provide a platform for developers to be able to easily schedule test runs and to collect and visualize results, for their own understanding and also for the community to see. This is mostly possible today, but it's rather hard, and there is a learning curve involved. Developers still have to be responsible for their own infrastructure today, for example running a Kubernetes cluster so that they can run test plans that have higher resource requirements, and they also need to know how to operate Testground.
So even though all of this is documented and automated, as I mentioned, there is a learning curve, and it takes valuable time away from developers. Developers are interested in writing test plans and actually verifying their own software, not in running infrastructure and operating Testground. We want to simplify this process, and we call this effort Testground as a Service.
Currently it hosts outputs and test results from some of the test plans that we've implemented over the last few months as part of Project Oni. They run on a periodic basis and confirm that we are not introducing regressions in Lotus from an end-to-end perspective.
On this dashboard you can see that we're running the graphsync test, the deals stress test, the deals end-to-end test, and the payment channel tests. The outcomes column represents an aggregate view of the different instance groups that are part of each test. For example, in the deals tests we have a bootstrapper node, clients, and miners, where clients select a random miner and initiate storage and, subsequently, retrieval deals with that miner. Currently those tests are triggered from the Oni repository on GitHub on an hourly basis, and in the future…
We also continuously run benchmarks for important protocols and libraries used in Lotus, such as the go-graphsync library, and make sure that their performance improves over time. This dashboard provides an aggregate visualization of all the recent graphsync test runs. We run the graphsync test plan with a variety of input parameters, such as the network latency and bandwidth, the concurrency factor for the transfers, and the size of the data to be transferred.
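A parameterized benchmark like this amounts to running the plan once per combination in a cross product of inputs. A small illustrative sketch (the `runParams` type and the values are hypothetical, not the real plan's parameters):

```go
package main

import "fmt"

// runParams holds one combination of inputs a benchmark run might receive.
type runParams struct {
	latencyMs   int // shaped network latency
	bandwidthMB int // shaped bandwidth
	concurrency int // concurrent transfers
	sizeMB      int // payload size
}

// buildMatrix enumerates the cross product of the parameter values, i.e. the
// full set of runs needed to cover every combination once.
func buildMatrix(latencies, bandwidths, concurrencies, sizes []int) []runParams {
	var out []runParams
	for _, l := range latencies {
		for _, b := range bandwidths {
			for _, c := range concurrencies {
				for _, s := range sizes {
					out = append(out, runParams{l, b, c, s})
				}
			}
		}
	}
	return out
}

func main() {
	m := buildMatrix([]int{50, 100}, []int{1, 10}, []int{1, 8}, []int{16})
	fmt.Println(len(m), "runs") // 2*2*2*1 = 8 runs
}
```

Each resulting combination becomes one test run, and each run contributes one data point to the dashboard described next.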
We then visualize a data point for each test run. If an improvement or a regression were to be introduced in the go-graphsync implementation, it would be visible here. These dashboards are still a work in progress, but this type of insight extraction is what is on the roadmap for Testground as a Service.
All of this is open source, and here are a few links to the test plans that we've implemented, as well as to Testground itself. Use those projects and feel free to contribute back.
If you have any ideas or suggestions on how to test distributed systems and provide valuable insight into them in any alternative way, something that we haven't thought about, we would love to hear from you on that as well. And, last but not least, a big shout-out to the Oni team, the Lotus team, the spec-actors team, the Testground team, and everyone else who contributed to the project.