From YouTube: CSCON[0] Raúl Kripalani - Testground: A Platform for Testing P2P and Distributed Systems at Scale

Description: Raúl Kripalani, Tech Lead at Protocol Labs, explains Testground, a platform used for testing P2P and distributed systems at scale. At the first-ever CSCON[0] virtual event!
A: We're joined by an incredible contributor, not just to a single protocol but to most of the modern peer-to-peer and blockchain protocols out there, through his work on libp2p and a whole bunch of other tools, and we're excited to hear from him directly in a second. Raúl Kripalani is a Tech Lead at Protocol Labs, and he's going to be speaking about Testground, which is a platform for testing P2P and distributed systems at scale. Hey Raúl, how's it going?
B: Awesome. I'm just going to quickly share my screen, because I do have a presentation. What does this mean? All right, I think we're on now. Okay. So thanks a lot for having me; it's great to be here, and it's great that you organized this conference. ChainSafe is a great collaborator in all regards. I've worked with a bunch of ChainSafe people on a bunch of projects, and it's always been a great experience.
B: So it's great to see CSCON[0], and I hope there are many more to come, and I hope that everybody is safe wherever you are in the world; just take care of yourselves. All right. So today I want to cover Testground. Testground is a platform that we developed at Protocol Labs because we needed it, and we hope that a lot of people need it; I think a lot of people, and a lot of projects in the space, need this kind of testing.
B
Of
course,
I'm
going
to
I'm
going
to
talk
about
ipfs,
libby2b,
falcoin
and
resnetlab,
which
is
which
stands
for
resilient
networks
lab,
which
is
a
practice
here
at
pl
research
protocol,
labs
research,
how
they're
using
test
ground
and
what
kind
of
projects
we
were
able
to
land,
particularly
in
2020,
as
we
created
test
ground
to
be
able
to
work
on
those
projects
to
work
on
those
protocol,
improvements
that
otherwise
would
have
been
impossible
to
ship
and
how
they
achieved
that.
I'm
also
going
to
talk
about
how
test
grant
works.
B: I think a lot of people here would be interested in knowing the details of that, so I'm going to cover the basic concepts today. I'm also going to talk about our roadmap: what's next for Testground. And I'll leave some room for Q&A, so feel free to punch those in as they arise.
B: So, let's start: what is Testground?
B: Testground was built from the ground up to be a completely independent project, because we believe that a lot of projects in the space need the kind of reproducible testing and the kind of simulation that Testground allows you to do. It's designed to be multilingual and runtime agnostic, and basically this means that you can write Testground test plans in any language.
B: The platform is able to deal with test plans written in any language. Right now we have an SDK for Go, there's work in progress on a JavaScript SDK, and we're really keen to see a Rust SDK, and more SDKs in other languages. I'll talk about what the SDK is; it's a very lightweight component, so it's very easy to implement.
B: And it's runtime agnostic. This means that a single test plan written using the Testground framework can be run on your local machine using executables; it can run in Docker containers, which give you nice features, as I'll discuss later; and you can also scale it out in a Kubernetes cluster of any size. Right now we have tested up to 10,000 instances of a test plan. Cool.
B: This is just to give it a little bit of movement, so you can see what the heart of Testground is. This is a demo running a DHT test plan, I believe. When I say a DHT test plan, it's a test plan that targets the libp2p DHT. I think this plan is spinning up 100 nodes, and it is querying for all of them, yeah.
B: Now, why Testground? I've already given a few touches on this, but I think it's really important that I convey the motivation that led us to build Testground in the first place. First of all, we all know that building distributed systems is hard, and testing distributed systems is even harder.
B
One
million
is
evolving
decentralized
systems
right,
especially
decentralized
systems
that
have
that
have
hundreds
of
thousands
of
users
right
and
have
node
populations
of
those
amounts
right,
because
a
single
algorithmic
change
in
a
single
place
might
be
easy
to
reason
about.
It
might
be
like
hey.
B
Might
look
like
the
right
thing
to
do,
but
getting
from
kind
of
like
this
is
the
thing
that
that
we
believe
would
be
an
improvement
to
confirming
that
that
improvement
is
going
to
get
the
desired
effects
at
the
level
of
scale
that
a
network
could
be
running
on
kind
of
like
by
the
emergent
effects
of
a
single.
You
know
little
change,
that's
the
thing
that
makes
these
changes
really
reason
really
hard
to
reason
about
right.
B
In
general,
I
would
say
that
ibfs
and
libby2b,
and
if,
if
you
hang
out
in
kind
like
you
know
all
the
communities
of
the
projects
that
were
sponsored
by
protocol
labs
or
started
by
protocol
labs,
you
will
see
that
there
is
kind
of
like
an
orientation
to
data
backed
engineering
right.
We
like
to
make
decisions
based
on
data
right
and
to
make
decisions
based
on
data
for
protocols.
B
We
need
reproducible
tests
right
because
the
test
would
need
to
like
exercise
a
part
of
the
code
base
and
yield
the
same
result
everywhere
right,
because
otherwise
you
know
it's
very
hard
to
take
decisions
and
having
that
kind
of
tooling
allows
us
to
gain
confidence
in
our
changes.
So
there
were
many
times.
You
know
during
the
history
of
these
projects,
of
of
the
projects
that,
like
living
to
be
ibfs,
that
we
wanted
to
make
specific
changes,
and
this
seemed
like
that
would
be
the
right
thing
to
do.
B
But
there
was
kind
of
like
a
lot
of
analysis.
Paralysis
in
you
know
the
community
in
the
team,
because
it
was
a
complex
change
and
we
didn't
know
what
the
impact
could
be.
We
couldn't
like
project
the
impact
and
simulate
the
impact
on
the
network,
so
having
a
platform
like
test
ground
brings
in
rigor
to
the
engineering
process
and
to
end-
and
tesco
is
really
good
at
this,
because
it
supports
various
workflows.
It
supports
continuous
testing
things
like
comparative
a
b
testing
right.
B
So
if
you
want
to
compare
how
a
given
commit
or
a
given
release
performs
against
another
one,
you
can
do
that
writing
a
single
test
plan.
It
supports
backwards,
compatibility,
testing,
iteration,
prototyping
and
so
on.
B
For
us,
we
truly
believe
that
test
ground
has
been
a
massive
accelerator
for
kind
of
like
the
projects
that
we've
been
wanting
to
pursue
and
maybe
to
be
an
ibfs
and
filecoin
and
also
the
decisions
that
we
needed
to
make
technical
decisions
when
it
came
to
their
protocol.
It
has
helped
us
validate
ideas
for
drastic
redesigns
and
improvements,
and
these
are
some
of
the
things
that
it
helped
helped
us
with,
and
we
actually
built
test
grounds
because
we
needed
tooling
to
to
to
address
all
these
challenges
and
we
couldn't
find
it
anywhere.
B
So
we
decided
to
build
it
for
for
the
community.
One
of
the
the
first
projects
that
tesla
was
used
in
was
for
the
content,
routing
and
bit
swap
improvements
that
made
it
into
ipfs
0.5.
B
Now
we
know
that,
if
you're,
if
you're
familiar
with
with
ibfs,
you
will
know
that
the
dht
and
bitswap
are
critical
components
of
of
ibfs.
They
kind
of
like
the
heart
of
ibfs.
The
problem
that
we
had
is
that
the
quality
of
the
ipfs
dhd
had
deteriorated.
B
We
had
we
had
huge
node
populations
and
unfortunately,
many
of
those
nodes
were
sitting
behind
nuts
and
they
were
not
dietable
so,
but
still
they
wanted
to
participate
in
the
dht
and
somehow
they
got
records
placed
on
them
and
also
the
iteration
logic
needed
some
a
bit
of
work.
We
had.
B
And
literally
like
we
spent
months
discussing
ideas,
but
honestly
it
was
like
this.
This
was
one
of
the
examples
of
analysis,
paralysis.
We
were
hesitant
to
touch.
You
know
these
parts
of
of
the
code
base
because
they
were
really
the
key
pieces
of
ipfs
and
the
wrong
decision
could
have
really
bad
effects
on
an
already
running
network.
So
this
is
like
kind
of
like
the
breaking
point
where
we
really
decided
to
to
invest
in
in
building
our
test
grant.
B
With
for
this
project,
we
were
able
to
spin
up
networks
of
up
to
1000
nodes
and
also
simulate
nuts
and
many
of
the
behaviors
that
we
expect
that
we
had
seen
in
the
wild
that
we
wanted
to
that.
We
wanted
to
correct
the
dht
to
be
tolerant
right,
so
things
that
we
did
word
build
experiments.
We
measured
the
results
we
iterated
over
and
over
again
or
like
at
least
1000
launches
of
test
test
test
runs.
B
We
can
pet
results,
verified
backwards,
compatibility
between
one
version
and
another,
and
then
this
helped
us
launch
the
thing
now.
The
second
project,
where
test
ground
was
super
useful,
was
in
in
testing
the
assumptions
and
testing
the
changes
and
the
improvements
that
we
wanted
to
make
to
gossip
sub
to
introduce
the
security
hardening
extensions.
B
So
if
you're
familiar
with
with
five
coin
and
eth2,
for
example,
these
two
networks,
the
pub
sub
layer,
is
powered
by
a
protocol
called
gossip
sub
1.1,
and
this
is
part
of
the
w2b
stack
now
gossip
sub
1.1,
the
the
idea
and
the
design
proposed
introducing
pure
scoring
things
like
adaptive,
gossip
dissemination,
peer
exchange,
opportunistic
grafting-
you
can
read
all
about
this
in
the
spec,
I'm
not
just
coming
up
with
fancy
names.
These
are.
These
are
really
interesting
mechanisms
to
make
gossip
sub
1.1
secure
and
attack
resistance.
B
We
simulated
we
created
attacks,
that
we
know
we're
going
to
succeed
against
1.0
and
then
we
played
them
played
them
against
1.1
and
verified
that
the
decisions
we
had
made
were
the
correct
ones
to
deter
those
attacks
and
also
tesla
also
helped
us
to
tune
the
parameters
for
for
five
point
and
really
this
project
to
test
ground
to
a
new
level
of
scalability
to
this
project
required
running
up
to
ten
thousand
nodes
in
in
a
cluster
according
to
a
choreography
and
so
on,
to
to
actually
carry
out
the
test
plan,
the
the
test
project
each
test
ramp.
B: We ended up creating an integration with Jupyter notebooks, which is open source, but we want to integrate it into Testground proper as part of our roadmap; I'll talk about that later. On the left, you can see a bunch of diagrams that were generated from the raw data that was coming from Testground runs, and we ended up publishing a paper as well.
B
So
if
you
look
it
up,
you
can
look
up
gossip
sub
paper
and
and
you'll
find
it
another
project
that
test
ground
was
super
helpful,
for
it
was
lotus,
stress,
testing
and
not
just
stress
testing.
There
was
a
bunch
of
testing
end-to-end
testing
themes
that
we
were
able
to
to
address
with
with
testground.
We
spun
up
just
performing
it.
The
months
before
maynet
we
spun
up
a
tactical
effort
called
project
tony
to
build
network
validation
tests
very
quickly
and
things
that
we
tested
were
deals
payment
channels.
B
I'm
not
I'm
not
sure
how
many
people
are
familiar
with
filecoin
concepts,
but
I
just
have
a
few
deals:
payment
channels,
chainsync
windowpost,
we
also
tested
d,
random
unavailability
and
how
the
chainsync
and
the
chain
could
cope
with
that
and
what's
nice
is
that
this
project
actually
catalyzed
many
improvements
in
taskcode
itself,
and
these
improvements
are
of
course
available
for
any
user
of
tesla.
B: In the Bitswap layer, there's a bunch of chatter and gossip going on: nodes are broadcasting wants and haves and so on, and there was some information that we thought we could piggyback onto and leverage for higher efficiency, and our researchers at ResNetLab used Testground to validate that. Now, enough about how Testground has been beneficial to the users that have been using it so far. To this list you can add other users, like query.io for example, and Eth2 teams have been experimenting with Testground as well, and there are a bunch of other users too.
B
I
don't
know
the
use
cases
very
well,
that's
why
I
haven't
covered
them,
but
it's
definitely
worth
it.
I
checking
them
out
now.
How
does
test
ground
proper
work?
I'm
gonna
go
through
kind
of
like
a
series
of
10
steps.
This
is
not
like
a
logical
flow,
it's
kind
of
like
it's
not
the
way
that
you
would
do
things
in
in
practice,
but
it's
got
it's
a
very
nice
logical
flow
to
explain
and
to
end
how
test
run
works.
B
First
of
all,
the
the
first
thing
I
want
to
cover
is
the
programming
model
now
with
test
ground.
If
you
look
at
other
testing
efforts
out
there,
they
all
focus
on
okay,
say
we
have
file
coin
or
an
eth2,
client
or
something
else.
I'm
gonna.
Like
those
testing
efforts,
those
platforms
tend
to
focus
on,
let's
deploy
a
thousand
copies
of
the
daemon
itself
and
then
let's
puppeteer
it
from
from
a
script
right
that
is
hitting
the
demons
in
a
particular
order.
That
is
calling
specific
methods
that
is
changing
settings
and
so
on.
B
That
approach
is
very
very
little
and
it's
it's
quite
brittle
with
test
ground
you're,
actually
testing
against
the
local
you're,
hitting
the
local
apis
of
the
program
or
the
application
under
test.
B
So
literally,
it's
almost
as
if
you're
sitting
inside
you're
not
remote
controlling
from
the
outside,
and
this
is
pretty
cool
because
it
gives
you
the
ability
to
to
tweak
and
fine-tune
parameters,
and
it
doesn't
with
the
other
approach
it
would
have
to
like
expose
every
single
thing
that
you
wanted
to
configure
in
a
test
via
an
api
right,
and
that
is
very
cumbersome.
It's
not
safe
it
like
it.
Doesn't
it
doesn't
it's
not.
It
doesn't
yield
very
good
velocity
with
test
ground
tests
that
are
written
for
test
ground
tend
to
look
like
unit
tests.
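To make the "test plans look like unit tests" idea concrete, here is a minimal sketch in Go. The `RunEnv` type and `runPing` function are illustrative stand-ins, not the real Testground SDK types: the point is only that a test case is a plain function that receives its runtime environment and exercises the code under test through its local API.

```go
package main

import (
	"errors"
	"fmt"
)

// RunEnv is a stand-in for the runtime environment a test-plan case
// receives. The real Testground SDK provides a much richer type; this
// sketch only carries what the example below needs.
type RunEnv struct {
	TestCase  string
	Instances int
	Params    map[string]string
}

// RecordMessage logs a line of diagnostic output, as a test plan would.
func (e *RunEnv) RecordMessage(format string, args ...interface{}) {
	fmt.Printf(format+"\n", args...)
}

// runPing shows the shape of a test-plan case: a plain function, much
// like a unit test, that returns an error on failure.
func runPing(env *RunEnv) error {
	env.RecordMessage("starting %s with %d instances", env.TestCase, env.Instances)
	if env.Instances < 2 {
		return errors.New("ping needs at least two instances")
	}
	// ...here you would start a libp2p host, dial peers, measure RTTs...
	env.RecordMessage("done")
	return nil
}

func main() {
	env := &RunEnv{TestCase: "ping", Instances: 2, Params: map[string]string{}}
	if err := runPing(env); err != nil {
		fmt.Println("failed:", err)
	}
}
```

In a real plan this function would be registered with the SDK's entry point and launched once per instance.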
B
They
feel
very
natural
now,
on
top
of
kind
of
like
this
unit
test.
What
you
do
is
you
overlay,
a
coordination,
you
overlay
some
coordination,
logic
right
and
you
coordinate
instances
via
a
redisback
sync
service
api.
It's
super
simple,
very
simple:
it
has
two
primitives,
essentially
that
unlock
a
lot
of
distribute,
distributed
coordination
patterns
that
you
can
that
you
can
build
like
atomic
sequences
locks
and
and
sharding
and
leader
election
and
a
bunch
of
things
right.
So,
basically
you
can.
You
have
built
your
test
plan.
B
You
have
built
the
logic
of
the
test
bar
and
now
you
want
to
wait
at
specific
places,
for
instance
for
certain
instances
to
do
something
or
you
want
to
share
data
like,
for
example,
you
know
in
the
libby
to
be
test
plan.
You
can
start
a
libby
to
be
host,
and
then
you
want
to
share
the
multi-address,
for
example,
the
listening
addresses
so
that
other
peers
can
connect
to
you.
All
this
is
is
what
the
what
the
coordination
and
sync
services
is
is
meant
to
do
now.
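The two primitives just described are, roughly, "signal entry into a named state" and "wait at a barrier until N instances have signalled". The real sync service is Redis-backed and works across processes and machines; the in-memory Go sketch below only illustrates the pattern (the type and method names are illustrative, not the SDK's).

```go
package main

import (
	"fmt"
	"sync"
)

// SyncService is an in-memory sketch of the two core primitives:
// signalling entry to a named state, and a barrier that waits until a
// target number of instances have signalled that state.
type SyncService struct {
	mu     sync.Mutex
	counts map[string]int
	cond   *sync.Cond
}

func NewSyncService() *SyncService {
	s := &SyncService{counts: make(map[string]int)}
	s.cond = sync.NewCond(&s.mu)
	return s
}

// SignalEntry marks this instance as having reached `state` and returns
// how many instances have been seen in that state so far.
func (s *SyncService) SignalEntry(state string) int {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.counts[state]++
	s.cond.Broadcast()
	return s.counts[state]
}

// Barrier blocks until `target` instances have signalled `state`.
func (s *SyncService) Barrier(state string, target int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for s.counts[state] < target {
		s.cond.Wait()
	}
}

func main() {
	svc := NewSyncService()
	const instances = 3
	var wg sync.WaitGroup
	for i := 0; i < instances; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// e.g. start a libp2p host here, publish its listen address...
			svc.SignalEntry("listening")
			svc.Barrier("listening", instances) // ...then wait for all peers
			fmt.Printf("instance %d: all peers ready, connecting\n", id)
		}(i)
	}
	wg.Wait()
}
```

Sharing data such as multiaddresses follows the same idea, with publish/subscribe topics instead of counters.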
B
These
two
things
give
rise
to
the
programming
model
of
of
tesco,
which
is
essentially
test.
Plans
are
a
distributed
state
machine.
So
there
is
no
special
conductor.
Instance
that
is
telling
instances
that
is
telling
all
other
instances
what
to
do.
The
choreography
emerges
from
itself
via
this
distribute
distributed,
distributed
coordination
idea.
This
makes
plans
extremely
robust
and
reproducible
and
especially
not
subject
to
to
a
single
point
of
failure
right.
The
coordinator,
so
it's
almost
like,
like
your
entire
test
plan,
is
a
distributed
state
machine
now.
B
Something
that
you
can
do
within
test
plans
is
is
setting
network
traffic
shaping
shaping
policies.
So
this
is
super
important
for
testing
protocols
right.
You
want
to
be
able
to
test
as
different
different
network
configurations,
and
you
want
to
be
able
to
test
latencies
jitter
corruption,
packet
laws.
B: ...whether a peer is connected or disconnected, intermittent connections: all these things are pretty important for testing the robustness of a protocol and of the implementation of that protocol. So this is a first-class citizen in Testground and its API. All the examples that I'm capturing here in these screenshots are Go examples, but we're working with the community on a JS SDK, and of course, if you want to work on a Rust SDK or something like that, just get in touch. And network traffic shaping policies can change during the test, so if you combine this with a choreography...
B: ...it gives rise to a very powerful combination, where nodes can coordinate to change their network configuration or their network behavior: the characteristics and the quality of service of the network link. So you can simulate things like a given node having a brittle connection that goes up and down, or very high variance in latency, and things like that, and you can coordinate how those things change across the cohort of instances via the sync service.
B
To
like
in
in
a
single
test,
plan,
combine
instances
that
are
gonna,
have
different
behavior
right,
like,
for
example,
fast
and
slow
groups
with
high
and
slow
high
and
low
latencies
or
nodes,
with
a
cache
nodes,
without
a
cache
and
so
on,
and
what's
interesting
is
that
you're
at
the
end
of
the
test,
the
test
outputs
are
categorized
by
group
and
all
the
metrics
as
well
that
we
publish
I'm
going
to
talk
about
that
in
a
second,
are
also
attacked
with
a
with
a
group,
so
you
can
tell
what
groups
produce
which
which
results
now.
B
Another
thing
that
we
needed
was
to
be
able
to
pitch
a
bunch
of
versions
of
the
same
code
together
and
see
how
it
how
it
behaves
right.
So
this
can
be
this.
This
this
feature
is
what
I'm
calling
multi
multiversion
tests
and
it
allows
you
to
in
a
single.
It
allows
you
to
do
two
things
in
a
single
test
run.
B
You
can
combine
instances
of
say,
for
example,
dht1
and
dht2
right
in
a
single
test,
run
to
test
compatibility
protocol
compatibility
between
those
instances,
because
we
all
know
that,
like
a
very
desired
property
of
protocol,
evolution
is
to
have
backwards,
compatibility
right
and
it's
it's
very
hard
in
practice
to
test
this
right.
So
tesla
allows
you
to
do
that.
B
Another
thing
that
you
can
do
as
well
is
do
comparative
testing,
so
you
don't
merge
the
in
a
single
test
run,
you
don't
merge
a
cohort
with
a
version
and
a
cohort
with
another.
B
Just
run
a
single
version
run
and
another
single
version
run
and
then
compare
the
results
right
now.
One
thing
with
api
evolution
is
that
it
might
break
between
versions.
So
if
you
have
a
single
test
plan
that
is
targeting
two
different
versions
of
the
same
api.
Well,
if
the
api
changes,
then
it's
not
going
to
compile
for
one
of
those
versions
right.
So
what
testground
allows
you
to
do
is
to
set
selectors
which
basically
select
the
files
to
compile
for.
A
B
Group
and
in
the
case
of
the
the
go
builder
this
is
implemented
by
using
by
using
build
tags
yeah,
so
that
allows
you
to
like
create
shins
and
and
a
bunch
of
things.
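As a sketch of the build-tag mechanism just mentioned: you keep one shim file per protocol version, and the selector activates the matching tag at build time so the plan compiles against either API. The file names, tag names, and package below are illustrative, not Testground's actual conventions.

```go
// File: compat_v1.go, only compiled when the "dht_v1" selector is active.
//go:build dht_v1

package compat

const Version = "v1"

// Shim code targeting the v1 API of the module under test goes here.
```

```go
// File: compat_v2.go, only compiled when the "dht_v2" selector is active.
//go:build dht_v2

package compat

const Version = "v2"

// Shim code targeting the v2 API goes here.
```

With `go build -tags dht_v1` (or `dht_v2`) only one shim is compiled, so the rest of the test plan can call `compat.Version` and friends without caring which API version is underneath.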
B: Now, of course, your test plans are going to be doing something: they're going to be recording data points and metrics and so on. The Testground SDK already offers you all these capabilities.
B: So when you record a metric or record a status, everything is output to files and also to InfluxDB, which I'll talk about in a second, and you can also emit assets: raw data, dumps, logs, and so on. Another thing is building and running the test plan. Basically, you do that by creating a TOML composition and submitting it to Testground via a command.
B
So
there
are
testground
has
a
bunch
of
runners
and
builders.
These
are
the
things
that
compile.
B
The
things
that
run
a
test
plan
and
the
one
key
feature
that
I
really
wanted
to
implement
in
test
ground
was
this
isomorphic
the
concept
of
isomorphic
test
plans,
so
the
ability
for
a
single
test
plan
to
run
in
different
environments,
because
there
are
different
stages
of
development,
of
a
test
plan
where
you
require
quick
fee,
quick
feedback
which
is
usually
the
opposite
of
launching
at
scale.
B
So
if
you
want
to
run,
if
you
want
to
build
a
test
plan
that
ultimately
targets
like
10,
10k,
10k
instances,
you're
not
gonna
you're,
not
going
to
run
it
the
first
like
while
you're
developing
every
single
iteration,
you
don't
want
to
have
to
submit
it
to
the
kubernetes,
because
that
creates
a
lot
of
latency
in
that
feedback
developer
feedback
cycle.
So
we
created
the
system
of
a
modular
builders
and
runners
where
a
single
test
plan
you
can
build
it
into
an
executable
and
run
it
locally.
B
You
can
which
gives
you
a
very
quick
feedback
feedback
loop,
and
it's
really
good
for
rapid
iteration
and
for
the
actual
development
of
the
test
plan
and
when
you're
ready
to
promote
it
and
to
like
start
testing
it
at
larger
scales.
You
can
build
it
into
a
docker
container
and
then
you
can
use
the
local
docker
runner
to
run
it
locally
or
you
can
submit
it
to
a
cluster
that
we
provide
to
a
kubernetes
cluster.
B
By
the
way
we
provide
all
the
playbooks
and
the
scripts
in
the
repo
to
instantiate
a
testground
cluster
on
aws.
B
So
of
course,
I've
I've
been
talking
about
how
instances
produce
results
and
record
points
and
record
observations,
and
so
on.
You
want
to
analyze
those
points
right
so
automatically
testground
publishes
all
the
the
sdk
publishers,
all
those
outputs
into
influx
db,
and
then
you
can
connect
something
like
chronograph
or
grafana
and
explore
export
that
at
last
you
also
want
to
be
able
to.
So
if
your
test
ground
is
emitting
assets,
your
test,
ground
test
plan
is
emitting.
B
For
example,
files
or
dumps,
or
logs
or
whatever
you
want
to
be
able
to
collect
them,
so
we
provide
a
command
where
you
know
after
the
test
plan
is
run,
you
can
collect
all
of
the
assets
that
are
duly
categorized
by
group
and
container
and
so
on
and
process
them
by
processing
scripts
or
just
explore
them
manually
all
right.
So
I
covered
kind
of
like
the
android
and
to
end
up
testing.
I
could
like
talk
about
test
run
for
for
an
hour,
but
but
yeah
we
don't
have
that
time.
B
You
can
check
out
the
docs
dogs.testground.ai.
Is
we
put
a
lot
of
love
into
the
docs,
so
definitely
check
them
out.
I
think,
and
I
think
they
are.
They
explain
very
well
many
of
these
like
how
to
develop
with
test
ground.
If
you
notice,
you
know
opportunities
for
improvement.
Just
just
let
us
know
now
what
is
next,
let's
create
some
excitement
here,
there's
a
there's,
a
cat
being
launched
into
into
space.
I
think
it's
a
cat,
I
don't
know
what
it
is
right
now.
B
Well,
I
don't
know
anyway,
tesco
as
a
service,
so
testcount
started
being
an
interactive
thing.
Where
you
have
a
demon
running,
you
have
a
backing,
kubernetes
cluster,
maybe,
and
you
submit
jobs
to
it
manually.
Now
we
want
an
automated
workflow
for
continuous
integration
to
integrate
with
continuous
integration
and
also
because
it
it
is.
It
allows
us
like
it's.
B
This
really
nice
place
where
you
could
archive
the
history
of
a
test
plan
and
all
the
runs
it
has
associating
it
with
a
branch
with
a
commit
with
a
bunch
of
things,
and
then
you
can
explore
the
evolution
of
that
part
of
the
code
base
as
measured
by
the
test
plan
right.
So
this
is
really
nice
to
have
because
it
allows
you
to
detect
regressions
and
improvements
very
quickly
right.
B
So
if
you
have
like
our
holy
grail
is
to
have
basically
like
this
is
kind
of
like
the
initial
version
of
test
round
right
and
we've
had
to
like
lay
every
break
before
we
could
like
get
to
the
get
to
the
ceiling
get
to
the
roof.
B: Awesome, yeah, so I'll just wrap up here with some resources. If you're curious about the motivation and the story, and want to understand once again what Testground is and look at it in your own time, then definitely read the launch post. We launched Testground and made it GA, general availability so to speak, or kind of like the first proper release of Testground after a pre-release, in May earlier this year, and that is the post.
B
Definitely
read
that
I
already
mentioned
it
earlier.
If
you
want
to
take
a
look
at
talks
and
dig
in
to
dig
into
stuff
with
dig.
A
B
Little
bit
more,
then
take
a
look
at
the
docs
website.
Of
course,
if
you're
interested
in
test
ground,
we
always
welcome
contributions.
We
are
a
very
open
community.
I
definitely
would
welcome
you
taking
it
out
for
a
spin
and
also
putting
a
star
if
you
want
forking
it
and
playing
playing
with
it
and
submitting
submitting
issues
and
put
requests
just
to
sum
up
how
to
get
started
with
test
ground.
Read
the
readme.
B
Definitely
take
a
look
at
the
docs
started,
the
get
getting
started
section
and
personally,
I
recommend
that
for
an
initial
first
test
plan,
you
take
a
look
at
the
ping
at
the
libby
to
be
ping
test
plan,
because
I
wrote
it
a
few
months
ago
when
I
needed
to
do
another
demo
and
it's
and
I
made
that
test
plan
didactic
right
with
a
lot
of
comments
that
explains
step
by
step,
what's
happening
and
the
reason
and
why
and
it
provides
a
lot
more
cross-links
to
a
lot
more
resources
cool
all
right.
A: Awesome, yeah. First of all, thank you; I won't rush too fast. Thank you so much for this awesome presentation. I think Eric said it best in the chat: basically, "where is it, where is it?" We're absolutely appreciative of everything that you're doing; it's so incredible, and something that so many different projects are going to be able to take advantage of for scaling. So yeah, let me get to these questions. So: how does this compare to other distributed-systems testing frameworks, like Jepsen?
B
Yeah,
so
I
think
I'm
not
super
familiar
with
jefferson
because
it
didn't
like
come
in
my
radar
when
I
did
the
research
it
came
after
and
I
haven't
had
the
time
to
like
properly
dig
into
it,
but
from
what
I've.
What
I
understand
it
is
meant
to
test.
It
is
oriented
towards
security
and
safety
testing
right.
Not
so
much
like
it's
it's
good,
it's
good
to
kind
of
like
take
a
single
node
and
hit
it
with
a
bunch
of
vectors
and
a
bunch
of
inputs
and
verify
what
the
results
are.
B
What
we're
looking
for
with
test
ground
was
to
conduct
real-world
scenarios
right,
which
is
hey.
We
know
there
are
these
conditions
that
are
impossible
to
simulate
in
our
network
or
there's
going
to
be
these
conditions
like,
for
example,
you
know.
B: ...and getting to that point of confidence is the thing that matters, especially when you're working with decentralized systems, which you don't control, whose deployment you don't control. You want to make sure that the code that you've put out there is good, because you don't control when others are going to upgrade. So definitely, that is the thing that gives us more confidence.
A
Absolutely
I
think,
people
don't
really
appreciate
how
much
you
know
also
that
that
confidence
side
of
things
really
plays
a
big
role,
because
at
the
end
of
the
day
you
know
traditional
test.
Suites,
don't
actually
give
you
that
confidence,
but
they
do
give
you
kind
of
at
least
a
checked
box.
So
to
have
things
like
this
to
take
it
to
the
next
level
is
vital
to
being
able
to
roll
out
new
innovations
as
they
come
along.
A
So
for
our
next
question
sounds
like
this
can
be
used
to
test
out
ideas
ipfs,
but
it
can
also
test
a
heterogeneous
network,
so
an
ipfs
network
with
go
node,
rust,
nodes
and
js
nodes.
If
so,
how
has
that
been
done?
And
I
think
this
is
relevant
to
our
next
question
too,
which
is
that
you
said
that
there's
an
sdk
for
go
and
one
in
the
works
for
js.
However,
you
also
said
that
it's
language
agnostic.
B
Those
are
those
are
really
really
good
questions,
so
I'll
start
with
the
with
the
first
one.
Yes,
this
is
jolene
the
roadmap.
We
like
won
one
goal
of
testcoins
to
enable
interoperability
testing
at
scale,
so
this
is.
This
is
definitely
on
the
roadmap.
The
idea
is
that
really,
so
I
didn't
talk
about
that.
I
didn't
talk
about
this,
but
a
test
ground
test
plan
has
a
contract
and
it
receives
a
formal
runtime
environment
which
is
in
the
form
of
of
of
environment
variables.
That's
what
it
is
right.
B
It's
like
basically
a
a
program
that
receives
environment
variables
and
does
a
bunch
of
things
with
them
right.
B
One
of
those
environment
variables
is
a
url
to
the
sync
service
and
another
one
is
a
url
to
the
influx
cv
right
and
like
all
these
things
then
are
built
as
inside
the
sdk,
which
is
a
very
lightweight
thing,
which
is
just
providing
a
nice
api
to
interact
with
a
sync
service
in
the
semantics
that
that
it
needs
and
as
well
with
inflexibility
in
the
semantics
that
then
other
tooling,
can
you
know,
process
information
information
on.
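The environment-variable contract just described can be sketched in a few lines of Go. The variable names below (`TEST_PLAN`, `TEST_CASE`, `TEST_INSTANCE_COUNT`, `SYNC_SERVICE_HOST`) follow my recollection of Testground's documented runtime environment; treat them as illustrative and check the spec for the authoritative list.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// RuntimeEnv holds a small subset of the contract a test-plan process
// receives from Testground via environment variables.
type RuntimeEnv struct {
	Plan      string // which test plan is running
	Case      string // which case within the plan
	Instances int    // how many instances participate in this run
	SyncHost  string // where to reach the sync service
}

// ParseRuntimeEnv reads the contract through an injectable getenv
// function, which makes the parsing easy to exercise without a real
// Testground daemon.
func ParseRuntimeEnv(getenv func(string) string) (RuntimeEnv, error) {
	n, err := strconv.Atoi(getenv("TEST_INSTANCE_COUNT"))
	if err != nil {
		return RuntimeEnv{}, fmt.Errorf("bad TEST_INSTANCE_COUNT: %w", err)
	}
	return RuntimeEnv{
		Plan:      getenv("TEST_PLAN"),
		Case:      getenv("TEST_CASE"),
		Instances: n,
		SyncHost:  getenv("SYNC_SERVICE_HOST"),
	}, nil
}

func main() {
	env, err := ParseRuntimeEnv(os.Getenv)
	if err != nil {
		fmt.Println("not running under Testground:", err)
		return
	}
	fmt.Printf("plan=%s case=%s instances=%d sync=%s\n",
		env.Plan, env.Case, env.Instances, env.SyncHost)
}
```

Because the contract is just environment variables, a test plan in any language only needs to read them and speak to the sync and metrics endpoints, which is exactly why SDKs in other languages are lightweight to write.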
B: For interoperability testing you would port the logic of the test plan across languages anyway. And for the last question: there's an SDK for Go and one in the works for JS, yet it's language agnostic, so how do you use this for other languages if there's no SDK? Well, the SDK is sugar: the SDK provides you with the user-facing features that you benefit from...
B: ...for interacting with Testground and other instances. And there is, I didn't cover it, a generic Docker builder that basically puts the logic on you: you could write a test plan without an SDK if you wanted to, in the language that you want, as long as you provide a Dockerfile that has an entry point.
B
That
is
the
only
thing
that
test
run
needs
to
then
be
able
to
schedule
that
that
test
plan
right
now.
Of
course,
it's
not
going
to
do
anything
if,
if,
if
it's
not
using
the
sdk,
it's
not
interesting
at
all
right,
because
it
won't
be
able
to
interact
with
other
resistances
by
the
sync
service
or
it
won't
publish
any
metrics
or
anything
like
that,
and
that's
where
the
sdk
comes
in.
So
really
you
can
build
test
analysis
in
any
language.
A
Awesome
awesome
really
great,
really
great
answers.
I
hope
that
that
answers
everything
to
the
people
who
did
ask
those
questions
yeah.
I
know
if.
A: So, thank you again, Raúl. I'm really excited; I feel like just over a year ago we were at Devcon, and here we are a year later, and I'm so excited to see, a year from now, where we'll be with this incredible project, and all the chains and all the projects that will be using Testground moving forward. So thank you so much for sharing this with us. To all those listening on YouTube...
A: ...yeah, please do check it out and share this with your friends, because it's incredible work that's being done by our friends at Protocol Labs. So thank you so much for joining us, and we're really excited to follow up with how everything goes.