Description
This talk was given at IPFS Camp 2022 in Lisbon, Portugal.
A: For the past few months, I've been working closely with the libp2p team. We created the tool and the tests that are used to verify the compatibility of go-libp2p and rust-libp2p. So what do we mean by interoperability? By interoperability testing? And why do we care about it?
There are many implementations of the libp2p specs. There is a JavaScript implementation, a Rust one, a Go implementation, a Nim implementation too, right, Java, and more. And with each of these implementations come many released versions, too.
So the goal of interoperability testing is to make sure every implementation and every version can communicate with each other. Obviously, it's important to us for multiple reasons. Libp2p is designed to support large, decentralized peer-to-peer networks, so the more implementations can talk to each other, the happier we are, right. We also rely on interoperability to provide cool features like hole punching, and we have to assume users won't upgrade their versions. They should, but they don't always upgrade.
Let me share the outcome first, so we all know what we are looking at. This is what interoperability testing will look like. The matrix on the left is a simplified version of what we are working towards. You take every go-libp2p version and every rust-libp2p version, you run some magic, and it tells you which versions can work together.
If there is a red dot somewhere, it means two versions couldn't talk with each other and we have an interoperability issue, which might be a missing protocol or a bug somewhere. If everything is green, we're happy. For now, keep in mind this is a simplified version: we'll have many more parameters, like the type of protocol, the transport, the muxer, etc. We expect this matrix to become huge, so our constraints are also maintainability and scalability, and another important feature to us is the ability to run in CI.
So that's the screenshot on the right, and yeah, those are our two goals. To create these tests, we use a tool called Testground. Testground is a platform for testing, benchmarking, and simulating distributed systems. It's designed to be scalable: we used it in the past to simulate thousands of nodes. It's also designed to be language-agnostic and runtime-agnostic, which is exactly what we need here. Protocol Labs has used Testground before: we used it to work on IPFS DHT improvements, Filecoin improvements, and libp2p optimizations.
Other teams are using Testground right now. Sigma Prime is using it to test improvements on the Ethereum network, and there is also Magma, which is using Testground to measure performance for their product. Alex from the Magma team gave a great presentation about his work on Testground during the virtual summit. The video is not online yet, but I recommend you check it out when it is. So let me try to share enough detail about Testground to convince you it's clever and that you want to take a look at it.
It's well designed: once Testground has created the network and started the instances, it pretty much disappears. There is no global orchestrator, so it's really designed to minimize the noise in your test results, which is a very, very clever design. And what we did here is create a new use case for Testground with the interoperability work. Conceptually, it's easy to write a distributed test for Testground.
Say I'm one node in the network. I send my IP and my libp2p port to the other instances in the network. Everyone does the same, and we all learn about each other's addresses via Testground's synchronization service. When we are done with the synchronization, I, as a node, dial the other nodes and send them a ping N times. If the connection and the pings complete, I record a success; otherwise, I record a failure.
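As a rough illustration of what such a test plan can look like, here is a minimal sketch using the Testground Go SDK and go-libp2p. It is not the actual libp2p/test-plans code; the topic name, ping count, and timeout are made up for the example.

```go
// Minimal sketch of a Testground ping test plan (illustrative only).
package main

import (
	"context"
	"time"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/p2p/protocol/ping"
	"github.com/testground/sdk-go/run"
	"github.com/testground/sdk-go/runtime"
	tgsync "github.com/testground/sdk-go/sync"
)

func main() {
	run.Invoke(runPing)
}

func runPing(runenv *runtime.RunEnv) error {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Each instance starts its own libp2p host.
	h, err := libp2p.New()
	if err != nil {
		return err
	}
	defer h.Close()

	// Publish my addresses and collect everyone else's via the sync service.
	client := tgsync.MustBoundClient(ctx, runenv)
	defer client.Close()

	topic := tgsync.NewTopic("addrs", &peer.AddrInfo{})
	if _, err := client.Publish(ctx, topic, &peer.AddrInfo{ID: h.ID(), Addrs: h.Addrs()}); err != nil {
		return err
	}
	peers := make(chan *peer.AddrInfo)
	if _, err := client.Subscribe(ctx, topic, peers); err != nil {
		return err
	}

	// Dial every other instance and ping it a few times.
	for i := 0; i < runenv.TestInstanceCount; i++ {
		ai := <-peers
		if ai.ID == h.ID() {
			continue // skip myself
		}
		if err := h.Connect(ctx, *ai); err != nil {
			runenv.RecordFailure(err)
			return err
		}
		for j := 0; j < 5; j++ {
			res := <-ping.Ping(ctx, h, ai.ID)
			if res.Error != nil {
				runenv.RecordFailure(res.Error)
				return res.Error
			}
			runenv.RecordMessage("ping to %s took %s", ai.ID, res.RTT)
		}
	}

	runenv.RecordSuccess()
	return nil
}
```

The publish/subscribe topic is how every instance learns the others' addresses without any global orchestrator, which matches the synchronization flow described above.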
We take our code, the main.go for example, and turn it into something that Testground can build and run. In our case, it's enough to be able to create a Docker image. We also spent some time making sure we can parametrize the build to use different libp2p versions, because remember: testing one libp2p implementation is great, but it's not enough. We also want to test many libp2p implementations, and we want this testing to be maintainable.
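One way to make such a build parametrizable, sketched below, is to accept the libp2p version as a Docker build argument and pin the dependency before compiling, assuming the builder forwards build arguments to docker build. The argument name GO_LIBP2P_VERSION and the base images are placeholders, not necessarily what the real test plans use.

```dockerfile
# Illustrative Dockerfile: the composition can pass a different
# GO_LIBP2P_VERSION per group, producing one image per libp2p version.
FROM golang:1.18 AS builder
ARG GO_LIBP2P_VERSION=v0.22.0
WORKDIR /plan
COPY . .
# Pin the requested go-libp2p version, then build the test plan binary.
RUN go mod edit -require=github.com/libp2p/go-libp2p@${GO_LIBP2P_VERSION} \
    && go mod tidy \
    && go build -o /testplan .

FROM debian:bullseye-slim
COPY --from=builder /testplan /testplan
ENTRYPOINT ["/testplan"]
```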
So we have all this templating stuff that we use to build up our composition file, which makes the interoperability testing slightly more complicated, but it's also a useful scripting language, right. In the end, as a libp2p maintainer, you can just edit the versions file, the YAML at the top, and you never have to look at the build template if you don't have to. So I've got my test plan and the composition file, and to execute it I'm just going to call testground run over this composition file.
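For context, a Testground composition is a TOML file that declares which groups of instances to build and run. The snippet below is only a simplified, hypothetical example with made-up group ids and versions; the real compositions in libp2p/test-plans are generated from the templates mentioned above and contain more detail.

```toml
[metadata]
  name = "ping-interop"

[global]
  plan            = "ping"
  case            = "ping"
  builder         = "docker:go"
  runner          = "local:docker"
  total_instances = 2

# One group per libp2p version we want in the matrix.
[[groups]]
  id        = "go-libp2p-v0.22"
  instances = { count = 1 }
  [groups.build]
    dependencies = [
      { module = "github.com/libp2p/go-libp2p", version = "v0.22.0" },
    ]

[[groups]]
  id        = "go-libp2p-v0.21"
  instances = { count = 1 }
  [groups.build]
    dependencies = [
      { module = "github.com/libp2p/go-libp2p", version = "v0.21.0" },
    ]
```

Running it is then a single command, for example: `testground run composition -f ping-interop.toml`.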
This is it: you call testground run on it, it loads the composition file and builds the Docker image for every group defined in the composition. You can see the docker build lines over there.
The test starts: Testground sets up the network and starts the containers, and then the instances synchronize with each other. You can see here that there are different versions; some of them are Rust, others are Go.
They're done synchronizing, and now they connect with each other and ping each other, and they go through this process a couple of times. At the end, you can ignore the error message at the top, because at the end we can see that all the instances are happy: every version returned okay, and everybody could talk to each other, right. And you can see the outcome, which is well hidden, but there's an "outcome = success" for each instance at the bottom.
We have a cross-implementation "latest" test, which runs the latest go-libp2p and rust-libp2p branches, and we use it during pull requests too, so we're testing this. And finally, there is a cross-implementation "all" test that starts the network with every known version, which right now is not used in CI. So remember, that's our goal, and we got it; we'll show the matrix later. And that's not all: we caught a bug. We caught a bug already, so I'm excited, because that's the result of a lot of work.
We created a new use case for Testground. We implemented many Testground improvements, especially stability improvements. We explored how to write interop testing and how to build a maintainable test suite with dozens of versions, and we caught a bug a few weeks after the release. So this is massive positive feedback for us, and it's great because we caught it before it had a chance to get released, right. And this is only step one.
If you've used Testground before and you're still traumatized by the experience of setting it up, installing it, and the lack of documentation, come and let's have a chat. The interop work we did in the IPDX team showed that Testground has tremendous potential, and we want to unleash this potential, right. So let's talk, because we want to improve Testground and we want to make it more approachable.
On the libp2p side, we are going to add more tests and share interop matrices. We are also working with Little Bear Labs on Node.js and browser support, so this is coming soon thanks to Glenn from the team. It's a good time to get involved, because we basically went from nothing to one test, or some tests, and now we're planning to go from some tests to many tests, which is the easy part, right. We all know that.
Finally, if you are wondering how to get involved: if you are working on libp2p or you are using it, come and take a look at the libp2p test-plans repo, or you can even use the repo to add tests for your own implementation. Same thing, let's have a chat if you want to do it, if you want to add your implementation, because we're happy to help. And finally, the best way to get involved, and I'm slightly biased here, is to join the IPDX team. We work on these kinds of challenging problems and tests.
A: Yeah, are there any questions?

B: [question inaudible]
A: Yeah, so that would be like performance metrics and measurements. That's something Testground can do, and I think the team wants to do it. I haven't used it yet, so I cannot talk much about it, but Alex is using it for performance measurements on Magma, so you should definitely ask him too.
C: Yeah, so what's the, I think it's called, testing cadence? Do you want to test only the release candidates, or do you want to have it running on every PR that gets merged, maybe?
A: So we're still trying to figure out all the parameters, but basically we want to be able to run some tests on every PR, and that's what we do with the two small tests we have, the Go versions and the latest versions. But we also know that we'll probably have larger test suites, like the cross-all-versions test that tests every known version, that we run only before a release.
D: Great talk, thank you. Do you have any plans to feed metrics that are produced from Thunderdome into Testground?
A: I'd love to. We'd have to have another chat with that team, so I'd love to set this up. We don't know how we can use this yet, but we know that eventually we want to use it, right, because Testground is great at setting up synthetic networks and doing conformance testing, but when you reach, like, the performance discussion, right, you want to set up actual realistic networks. And so, yeah.