From YouTube: Testing P2P Applications with Kubernetes and Toxiproxy, an IPFS Testbed 2020-07-06 Weekly Call 🙌📞
Description
João Antunes presents IPFS Testbed, a testbed for IPFS built using Toxiproxy and Kubernetes.
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
A: All right, so hello and welcome to the IPFS weekly call. Today is Monday, July 6, 2020. My name is Derek and I'm your host for the month of July. So this week we have João Antunes with us to present his work on IPFS Testbed. Very nice to have you with us, João. You're gonna want to take it from here, yeah.
B: Yeah, sure, so, okay, seems it's working, yes. So today I'm going to speak to you about a byproduct of our master's thesis, which has been the IPFS Testbed. And so, a little bit about me: I'm João Antunes, I'm a software engineer at a software engineering consultancy, and I was a master's student at Técnico, but I've successfully finished my master's thesis, and again, this is a byproduct of that. And there's my handle over there.
B: But while we were working on it, we eventually ran into a problem, which is we wanted to confirm that our system actually performed as we wished, and so we needed to test it, essentially. And we came up with a series of requirements: we wanted something that was easy to deploy and test different versions of, and we wanted, of course, to extract relevant usage metrics, both from the application itself but also from the hosts that were running said application, like CPU consumption and memory.
B: So we wanted something that could simulate such network constraints, and we wanted something that we could easily test out on a local system but also scale as much as we wanted, slash as much as we could given our budget. And we wanted something that could be controlled from a central point, sorry, that we could instrument and, of course, perform our own experiments with, but also that we could easily automate. And by looking at all of these requirements...
B: One thing that pops out is reproducibility. And if we try to understand it, again from the requirements, reproducibility is essentially that, given the same build and the same constraints, for a given input we would always get the same output. Or, of course, you can look at the Wikipedia definition, which is way better than mine. But...
B: Historically speaking, reproducibility is usually achieved through some sort of virtualization, and more recently, of course, a lot of people rely on a lighter-weight way of virtualization, which is essentially containerization, and that's what we wound up with. And so we went with Docker containers, but we needed something that would allow us to orchestrate said containers, whether it be locally or, again, on a provider, and that's Kubernetes, the reference orchestrator nowadays.
B: And the next step, of course, and this is the whole point of this, is to extract data. Our tests would only be as good as the data that we were able to extract, and because of that, we needed something that was able to provide us with application-level data, specifically about PulsarCast.
B: We really wanted to check scalability, and there are a lot of projects out there that simulate some sort of network. A great example of that is Mininet and its kind-of sibling projects, Containernet and Maxinet, which allow you to do a lot, but a lot more than what we actually wanted. With those you can simulate all sorts of L2 network configurations, like connecting switches, connecting routers.
B
All
of
that,
we
that
there
was
way
more
than
what
we
wanted.
We
we
essentially
just
wanted
something
that
could
simulate
network
constraints
around
our
application.
We
didn't
really
want
and
what
were
the
the
network
configurations
underneath
there
and
given
that
we're
already
in
the
the
community
cycle
system?
If
you
look
at
projects
around
it,
we
have
the
network
mesh
projects,
it's
easier.
That
allows
you
to
inject
some
kind
of
faults.
B: However, usually these faults are focused on the L7 layer, so the HTTP layer, and again, that's not what we really needed. And so we eventually came to find a really useful project, Toxiproxy, by the folks over at Shopify. It is an open-source TCP proxy that is quite thin and allows you to configure a series of network constraints: TCP faults, network congestion, latency, all of that. And, more importantly, it also exposes an HTTP API.
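As a rough illustration of the kind of control that HTTP API gives you, the sketch below builds the JSON bodies Toxiproxy expects when creating a proxy and attaching a latency toxic. The proxy name, ports, and latency values are made-up examples, not the configuration used in the talk.

```python
import json
import urllib.request

TOXIPROXY_API = "http://localhost:8474"  # Toxiproxy's default API address

def proxy_body(name, listen, upstream):
    # Body for POST /proxies: a thin TCP proxy in front of `upstream`.
    return {"name": name, "listen": listen, "upstream": upstream}

def latency_toxic(latency_ms, jitter_ms=0):
    # Body for POST /proxies/<name>/toxics: adds latency to downstream data.
    return {
        "type": "latency",
        "stream": "downstream",
        "attributes": {"latency": latency_ms, "jitter": jitter_ms},
    }

def post(path, body):
    # Send a JSON body to the Toxiproxy HTTP API (needs a running server).
    req = urllib.request.Request(
        TOXIPROXY_API + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Example usage (only works against a running Toxiproxy instance):
# post("/proxies", proxy_body("ipfs-node-1", "0.0.0.0:14001", "127.0.0.1:4001"))
# post("/proxies/ipfs-node-1/toxics", latency_toxic(500, jitter_ms=50))
```

Because every constraint is just an HTTP call, this is what makes the testbed scriptable from a central point.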
B: ...properties, essentially. And finally, of course, we have the Beats to collect both host metrics and logs, Logstash to process them, Elasticsearch for storage, and finally, we rely on Kibana to visualize the whole thing. So, yeah, this is the view if you're considering multiple Kubernetes nodes. I'll go ahead and skip this part, but if you're interested just reach out to me afterwards; essentially it just describes the namespaces that we relied on and how the whole thing comes together.
B: So now we have a way to simulate network constraints, and we, of course, still need a way to control these from a central point, meaning a way to interact with it. And so we created a really small CLI that, again, if you remember correctly, interacts with the Kubernetes API and the Toxiproxy API. So the idea is that this tool allows us to execute commands at each of the IPFS nodes.
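That "run a command on node N from a central point" idea can be sketched with a plain `kubectl exec` wrapper; the pod naming scheme and the `ipfs` subcommands below are hypothetical stand-ins, since the talk's actual CLI speaks to the Kubernetes and Toxiproxy APIs directly.

```python
import subprocess

def exec_argv(node_index, ipfs_args):
    # Build a `kubectl exec` invocation that runs an ipfs command inside
    # the pod for a given node; the pod name "ipfs-<n>" is an assumed convention.
    pod = f"ipfs-{node_index}"
    return ["kubectl", "exec", pod, "--", "ipfs", *ipfs_args]

def run_on_node(node_index, *ipfs_args):
    # Actually execute the command against a live cluster.
    return subprocess.run(exec_argv(node_index, ipfs_args),
                          capture_output=True, text=True)

# Example usage (needs a cluster with the pods deployed):
# run_on_node(1, "pubsub", "pub", "my-topic", "hello")
```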
B: So essentially, over here, locally, I already have five IPFS deployments running, and what I'll be doing next is tailing the logs of each of the deployments. This is running locally, and I'll provide an overview over here of how this was run afterwards. And again, what we're doing over here is just, essentially, from a central point, which is my machine, actually creating a topic on a specific node, so this one is node one, and afterwards I'll be moving on and interacting with other nodes.
B: So this is the CLI that I was telling you about. I'm just interacting with it locally, and we can do all sorts of things: not only run PulsarCast commands, as we're doing over here, but we could also just ping other nodes, or even, actually, I'm going to move on to the presentation.
B: Execute a series of bulk commands that have been previously described in some sort of file. Okay, and so this is how we did the bulk of the work for our tests, which I'm actually going to describe afterwards. So this essentially covers our whole problem. So, how did we run these in practice?
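A bulk run like the one described could look like the sketch below: a plain text file with one command per line, each prefixed by the node it should run on. The file format here is invented for illustration; the talk doesn't specify the real one.

```python
def parse_bulk_file(text):
    # Each non-empty line: "<node-index> <command ...>", e.g. "1 pubsub pub foo hi".
    commands = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        node, _, rest = line.partition(" ")
        commands.append((int(node), rest.split()))
    return commands

example = """
# node  command
1 pubsub pub news hello
2 pubsub sub news
"""

for node, args in parse_bulk_file(example):
    print(node, args)
```

A runner would then dispatch each parsed command to the matching node, which is what lets a whole experiment be automated and replayed.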
B: We then actually extracted data on this side and created graphs afterwards for the actual work. But yeah, so all of these projects are also open source; again, my master's thesis work is open source as well, so feel free to check it out. And if you're also curious about PulsarCast and you weren't able to attend the last presentation, also because it was cut short, the project has all of these...
A: [question inaudible]

B: Yeah, sure, so I'll just keep sharing my screen; it might be useful, I don't know.
A: [question inaudible]

B: Directly, it doesn't; so out of the box, it doesn't. Essentially, it is possible to make it work; it wouldn't take that much of a deal. So essentially, I'm just going to quickly show you, just for some context, unless we're short on time.
B: Again, we're using Helm to package, if you remember correctly, it was listed earlier, we're using Helm to package our application. And I don't know if everyone is familiar with it, but it essentially allows us to package the whole of these resources into a single deployment, or a single release, better said. And over here we're using js-ipfs, but this is essentially just a matter of configuration. Okay, so over here we have the manifest for our deployment, our js-ipfs deployment, and changing...
B: This would be a matter of just changing the image that is being used, and we can actually provide it through configuration, and essentially changing these ports, because I do believe that go-ipfs runs on a different set of ports. But that would basically be the bulk of the changes that would need to take place; it's relatively easy to, again, use other things other than js-ipfs.
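For context, the image and port fields being discussed live in the container spec of a Kubernetes Deployment manifest, roughly like the fragment below; the names, image tag, and port comments are illustrative, not the talk's actual values.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ipfs-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ipfs-node
  template:
    metadata:
      labels:
        app: ipfs-node
    spec:
      containers:
        - name: ipfs
          image: ipfs/js-ipfs:latest   # swap for a go-ipfs image to change implementations
          ports:
            - containerPort: 4002      # swarm port (js-ipfs default; go-ipfs uses 4001)
            - containerPort: 5002      # HTTP API port (go-ipfs uses 5001)
```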
B: You have a series of things, such as delivery guarantees, and you want to see that your data is persistent, things that aren't usually associated with the p2p realm, where you have a lot of pub/sub solutions that focus essentially either on real-time communication or don't provide the same kind of guarantees that you usually have in the centralized realm. So this is why we came up with PulsarCast. So, PulsarCast is a topic-based pub/sub module that was actually implemented on top of libp2p.
B: It focuses on data persistence, eventual delivery guarantees, and scalability. And so over here is, essentially, an overview of the inside of it. At the heart of it, we relied on a concept that's super similar to IPFS, which is the Merkle DAG. And the way we went about it is, we have these two core resources, the topic descriptor and the event descriptor, and these resources link to each other: so event descriptors actually link to the topic...
B: ...they refer to, and to previous descriptors on the event stream, and topic descriptors point to a previous version of said topic, and also to other subtopics, if it's relevant for them to actually point to other subtopics. And so the idea is that, through these links of messages, you're able to resolve these Merkle links up to a point where you may have a missing message, or you weren't actually even part of the network.
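The linking scheme described, content-addressed descriptors pointing at a topic and at the previous event, can be sketched as below. The field names and the use of a SHA-256 hex digest as an id are illustrative only; PulsarCast actually uses IPFS-style Merkle DAG links (CIDs).

```python
import hashlib
import json

def cid(obj):
    # Toy content address: hash of the canonical JSON encoding.
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

store = {}  # a stand-in for the DHT / block store

def put(obj):
    store[cid(obj)] = obj
    return cid(obj)

# A topic descriptor, and two events forming a linked stream under it.
topic = put({"name": "news", "previous": None})
e1 = put({"topic": topic, "previous": None, "payload": "first"})
e2 = put({"topic": topic, "previous": e1, "payload": "second"})

def resolve_stream(event_id):
    # Walk the `previous` links back; stops where a block is missing,
    # i.e. a message we never received, or one published before we joined.
    payloads = []
    while event_id is not None and event_id in store:
        ev = store[event_id]
        payloads.append(ev["payload"])
        event_id = ev["previous"]
    return payloads

print(resolve_stream(e2))  # walks e2 -> e1: ['second', 'first']
```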
B: At the time. So you're able to reach out to the network and say, you know what, I don't have these event descriptors, could someone provide them? So it ends up working more like that, rather than not even acknowledging that you haven't received the content you should have. And so, just as a brief overview, the design of the whole thing is this: the messages link together, and the messages also link to the topics, and the topics link to a previous version of said topic descriptor.
B: So, because we're dealing with immutable content, if you have the topic foo and you wanted to change something, like, for example, add a new subtopic, you would need to create a new topic descriptor, but that one could then point to the previous one. So essentially, this gives you a nice history of said topic. And finally, you have the subtopic links.
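That update-by-new-descriptor pattern could look like the following; again, the field names are made up and a JSON hash stands in for a real Merkle link.

```python
import hashlib
import json

def cid(obj):
    # Toy content address for an immutable descriptor.
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

store = {}

def new_topic_version(name, subtopics, previous_id=None):
    # Descriptors are immutable: changing a topic means publishing a new
    # descriptor that links back to the previous version.
    desc = {"name": name, "subtopics": subtopics, "previous": previous_id}
    store[cid(desc)] = desc
    return cid(desc)

v1 = new_topic_version("foo", [])
v2 = new_topic_version("foo", ["foo/bar"], previous_id=v1)  # add a subtopic

def history(topic_id):
    # Walk the version chain, newest first.
    versions = []
    while topic_id is not None:
        desc = store[topic_id]
        versions.append(desc["subtopics"])
        topic_id = desc["previous"]
    return versions

print(history(v2))  # newest first: [['foo/bar'], []]
```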
B: If the topic makes it mandatory, of course. And underneath, we relied on the Kademlia DHT provided by libp2p to help us create the dissemination trees that we then used to disseminate content for each of the topics. I think that essentially gives an overview. As for results, we were able to actually achieve a 99% subscription fulfillment rate. Essentially, the fulfillment rate tells us, for the amount of subscribers...
B: ...we have to a topic, how many of those subscribers were able to receive the content they were subscribed to. And compared to, for example, floodsub... I don't have the results over here, actually, yeah, sorry, I don't have the results for floodsub over here, but the results were quite a bit lower. And the way we tested was actually with an open-source dataset of Reddit comments from 2007, a sample of approximately 225 thousand comments, again, running on the testbed that we previously described.
A: [question inaudible]

B: Yeah, right, so, I don't know if I have any information over here, actually. I thought I... okay. So, for the purpose of the whole experiment, what we did was we created essentially three different experiments. So, PulsarCast, one of the things that I haven't actually told you is, because this content is immutable...
B: The topic descriptor actually allows us to add some kind of configuration to it, and we can use this configuration to actually tell how we want these messages to be linked. So you're actually able to enforce a set of properties, such as, if you want, like, a delivery guarantee, you can do it, again, by configuring said topic. And so we went about it with three different experiments: one was floodsub, the other one was PulsarCast...
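A hypothetical sketch of what per-topic configuration like that might look like; the option names below are invented for illustration and are not PulsarCast's actual schema.

```python
# A topic descriptor carrying configuration that dictates how events
# under it are linked and what guarantees subscribers get.
def make_topic(name, require_event_links=True, eventual_delivery=True):
    return {
        "name": name,
        "config": {
            # Link every event to its predecessor, enabling history resolution.
            "require_event_links": require_event_links,
            # Keep serving missed events to late or disconnected subscribers.
            "eventual_delivery": eventual_delivery,
        },
    }

# A realtime chat topic can drop the guarantees; a durable feed keeps them.
realtime_topic = make_topic("chat", require_event_links=False,
                            eventual_delivery=False)
durable_topic = make_topic("news")

print(durable_topic["config"]["eventual_delivery"])    # True
print(realtime_topic["config"]["require_event_links"])  # False
```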
B: We actually struck the limits of our testbed; essentially, Kubernetes started shutting down a couple of nodes due to either memory usage or CPU usage. However, even still, compared to floodsub, we performed better, providing a better quality of service, of course. And the way we brought everything together was actually through...