From YouTube: Hydra - 2020-04-06 IPFS Weekly Call 🙌📞
Description
This week we have a presentation on Hydra: https://github.com/libp2p/hydra-booster/
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
B: Right, so our friends at Matrix are organizing a virtual meetup. This is the kind of meetup that everyone can join, and they can join from anywhere. The title is "Open Tech Will Save Us", and there will be multiple talks. We'll have Matthew talk about Matrix and what they have been up to with regard to using Matrix, the multiple use cases, how it works, and so on. We'll have Saúl — I'm pretty sure I'm pronouncing it incorrectly, so sorry if you watch this recording — tell us about Jitsi: how Jitsi has been growing rapidly, how Jitsi works, and how Jitsi enables peer-to-peer video calls. I'll also be speaking, and I'll be introducing gossipsub. Gossipsub is a router for pub/sub — one of the router implementations available for peer-to-peer pub/sub — and we have been doing a lot of work to make it more resilient and more robust against attacks. You can consult all of these updates in the gossipsub 1.1 spec, and I'll be introducing that and explaining how it works. It's going to be fun. And then, last on the list, we will have one more person from the Matrix community, who will present UX patterns on how to provide a good UX for users that have to deal with keys, so that they can have end-to-end encrypted communications. So yeah, as I said, this is a free meetup; everyone is welcome to join.
C: I've shot an email off to Oliver and CC'd Stephen, but I do not know whether it will be received in time.
D: The other thing about the DHT booster — the Hydra booster — is that we've added a method of generating peer IDs such that they are evenly spread around the DHT. Previously, the old DHT boosters were generating random peer IDs, and there was no guarantee that they would be nicely placed in the DHT, so when anyone ran a DHT query, they might not run into one of these Hydra booster heads. That is what we're trying to address with the Hydra boosters.
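The even-spread idea can be sketched as rejection sampling: keep generating random IDs until every equal slice of the 256-bit keyspace holds exactly one head. This is a minimal illustration of the concept, not hydra-booster's actual algorithm; the 32 random bytes standing in for a libp2p peer ID are hypothetical.

```python
import hashlib
import os

def keyspace_region(peer_id, num_regions):
    # A peer's DHT location is the SHA-256 of its ID; the top bits decide
    # which of num_regions equal slices of the keyspace it falls in.
    location = int.from_bytes(hashlib.sha256(peer_id).digest(), "big")
    return location * num_regions >> 256

def generate_balanced_ids(num_heads):
    # Rejection-sample random IDs until every slice of the keyspace holds
    # exactly one head, so the heads end up evenly spread around the DHT.
    ids = [None] * num_heads
    while any(i is None for i in ids):
        candidate = os.urandom(32)  # stand-in for a real libp2p peer ID
        region = keyspace_region(candidate, num_heads)
        if ids[region] is None:
            ids[region] = candidate
    return ids

heads = generate_balanced_ids(8)
print([keyspace_region(h, 8) for h in heads])  # → [0, 1, 2, 3, 4, 5, 6, 7]
```

With purely random IDs, some slices end up with several heads and others with none, which is exactly the clustering problem described above.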
D: What we have at the moment is: we've deployed a bunch of them. There are five Hydras, and we've named them — I don't know if you can actually see this, but they are Alasybil, Bubbles, Chumpy, Domino and Euclid — and each one of those has a different number of heads. We're just testing them out at the moment.
D
You
can
see
on
this
graph
that
as
Alice
evil
has
25
heads
bubbles
has
50
heads
company
has
a
hundred
heads
domino,
has
150
heads
in
euclid
has
200
heads,
so
it's
pretty
rad
and
on
this
graph
up
here,
we've
got
the
current
current
connected
peers.
So
this
is
across
all
Sybil's
and
all
hydras.
The
total
connected
peers,
we're
rockin
around
600,000,
connected
peers
in
the
DHT
Network,
which
is
pretty
cool
and
we
can
break
it
down
by
each
each
Hydra.
D: And then we've got a nice little dropdown here, and we can actually select a particular head of a Hydra — the sybil, that's what we're calling it — and the Hydra, to further narrow it down, which is kind of cool. You can see that, obviously, the ones that have more heads have more total connections. Routing tables is a similar sort of thing: if you know anything about the DHT, it's the key bucket — in our case the k-bucket — where peers are placed.
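As a quick illustration of the k-bucket idea: a peer's bucket index is the position of the highest set bit in the XOR distance between the two IDs, so peers sharing a longer prefix with you land in lower buckets. A minimal sketch with toy 8-bit IDs, not the real libp2p implementation:

```python
def bucket_index(local_id, peer_id):
    # Kademlia files a peer under the k-bucket whose index is the position
    # of the highest differing bit between the two IDs (their XOR distance).
    distance = local_id ^ peer_id
    assert distance != 0, "a node never appears in its own routing table"
    return distance.bit_length() - 1

local = 0b10110000
print(bucket_index(local, 0b10110001))  # differs only in the lowest bit → 0
print(bucket_index(local, 0b00110000))  # differs in the top bit → 7
```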
D: So this graph just here shows the unique peers. You've got to take this result with a bit of a grain of salt, because each one of these could have seen the same peers, essentially. But, weirdly, Bubbles — which doesn't have as many heads as Euclid — has seen more unique peers in the DHT, which is kind of fun: it's seen around thirty-one thousand unique peers in the DHT, which is cool — which is not everyone, I guess. CPU usage is ticking along, and then there are the provider records.
D: We've currently got around eight million in the store for Euclid, which is pretty cool. And I'm currently working on a feature whereby Hydra sybils, when they're asked if they have a provider record for a particular CID and they don't have it, will actually proactively go and fetch that record, so they do have it for next time. That means — what I'm imagining we'll see soon — is that we will have many, many more provider records in the store.
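The proactive-fetch behaviour described here can be sketched like this. All names are hypothetical stand-ins; a real sybil answers DHT wire messages rather than Python calls:

```python
class HydraSybil:
    def __init__(self, fetch_from_network):
        self.store = {}                        # CID -> provider record
        self.fetch_from_network = fetch_from_network

    def handle_get_providers(self, cid):
        record = self.store.get(cid)
        if record is None:
            # Miss: proactively fetch the record now, so the *next* query
            # for this CID can be answered from the local store.
            fetched = self.fetch_from_network(cid)
            if fetched is not None:
                self.store[cid] = fetched
            return None
        return record

network = {"bafy-example": "provider:QmPeer"}  # stand-in for the wider DHT
sybil = HydraSybil(network.get)

print(sybil.handle_get_providers("bafy-example"))  # → None (miss, fetched)
print(sybil.handle_get_providers("bafy-example"))  # → provider:QmPeer
```

The first query still misses, but it seeds the store, which is why the store sizes would grow the way the speaker expects.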
D: So that's good news, because then we'll be able to respond to more queries, which is kind of cool. The rest is kind of like node monitoring, or virtual-machine monitoring. Memory usage — this is memory per connection — is pretty low, which is great. Memory usage is superb considering we've got thousands and thousands of connected peers, so we're kind of good there. CPU is around ten to fifteen on the biggest node we've got, and it can go up to like twenty right now.
D
So
you
know
they
like
it's.
Not
it's
not
right.
It's
not
something
that,
but
what's
really
cool
is
that
we've
got
it's
just
like
it's
running
on
Google
cloud,
it's
running
on
kubernetes
we've
got
a
single
VM
hosting
two
to
hydra's,
and
so
we've
got
free
premiums
hydras
on
each
essentially
and
they're.
Each
one
of
those
hydras
is
running
like
five
hundred
ipfs
nodes,
essentially
which
is
kind
of
cool,
because
you
can
just
do
that.
D
You
know
right,
which
is
good,
so
I
think
my
mum's
decided
to
find
me
while
I'm
doing
this
cool
crate,
so
yeah
open
file,
descriptors,
not
lemony,
it's
still
good
and
you
can
see
that
we've
got
with
spending,
probably
spending
too
much
time
with
garbage
collection.
That
was
a
10%
previously
but
I
below
it.
D
There
recently
by
around
2%,
by
the
looks
of
it,
which
is
kind
of
cool,
but
that's
Hydra
Buse
is
nuts
that
the
control,
the
bigger
fire
we
have
for
monitoring
them,
like
the
one
of
the
good
things
about
this
is
that
this
project
has
now
like
we've
got
proper
ownership
for
it.
We've
got
these
metrics,
which
I'm
showing
you
now,
which
we
didn't
previously
have
well
I
mean
we
had
like
regular
kind
of
node
like
rpm
monitoring
stuff,
but
we
didn't
have
this
insight
into
the
you
know
the
unique
peers
and
the
routing
table
sighs.
D: Ideally, no. I guess you could if it would help you or your network, but we're running them to ensure that all of the IPFS network has really good DHT response times and that sort of thing. You shouldn't need to, unless you've got a big application with thousands and thousands of users and your own implementation with your own version of the DHT — and even then, no, I wouldn't expect you to; you'd have to... well.
B: Running a Hydra booster is actually like giving an extra boost to the network, and the way it is deployed is all documented in the README itself. So actually, if you could go there, just read those docs and tell us if anything doesn't make sense. If, by reading those docs, you're able to deploy it yourself, that would be a great piece of feedback. But the TL;DR is:
B
If
you
you
should
like
run
on
Rhode
Island,
Reds,
addres
notes,
essentially
until
we
have
the
next
version
of
my
previous
release
and
feel
we
played
got
to
use
the
next
version.
Until
we
finish
all
the
steps
because,
like
I
still
has
some
features,
features
that
we
are
working
on,
perhaps
not
like.
You
will
not
get
any
significant
result
out
of
it.
B
That
sin
either
is
designed
as
kind
of
like
a
service
to
augment
the
quality
of
service
or
as
a
tool
to
augment
the
quality
of
service
of
the
network,
and
so
anyone
can
run
a
Hydra
and
that
should
only
increase
the
quality
of
service
of
the
network,
and
so,
if,
if
for
some
reason,
you
think
that
the
network
is
not
good
enough,
you
can
like
run
more
address.
That's
it.
B
We
are
also
going
to
be
proactive
about
it
and
like
see
as
a
network,
expansion
and
shrinks
and
kind
of
a
cut,
just
the
number
of
eyebrows
that
we
have
running
but
like
from
the
beginning
from
the
design
phase.
We
wanted
to
have
a
node
that
could
provide
this
service
illumination
without
hard
coding,
any
piece
without
like
having
to
tell
I
profess
notes,
dial
to
the
specific
nail.
D: I'd expect that, if you want to help out with IPFS in general, then yeah, absolutely run them. We're running them because we want to help everyone who's on the IPFS network. But if you're building your app specifically on IPFS, you're probably not focused on making the network as a whole a bit better for everyone in the network; you just want it better for your app — and this is not that, this is more about helping everything.
D: At the moment we've got multiple Hydras, and the peer IDs for the heads that they generate are balanced within each Hydra, but not overall. So, ideally, we kind of need a service which remembers the peer IDs that have been generated and keeps them balanced across all of the Hydras.
D: That way, all the Hydras get properly spread, because at the moment we might have one Hydra which is spread nicely, but then another Hydra might overlap it, instead of also being spread nicely. So that improvement is kind of on the roadmap. But yeah, okay.
A: The routing table size of 20 is just about the number of buckets you expect to be full, I think. Usually this is around at most 200 peers; if you have 200 peers, you should be good. The 2,000 just means that connections cost you in terms of resources, but not too much, and it's more expensive to recreate those connections than to just keep them around. So we err on the side of keeping more connections, because it means that if you need them connected in the future...
A: ...you're already connected. If you make a query for some random bit of content, you don't know where it is, and there's a chance that a peer you're already connected to will have the data you're looking for. So that's usually why we try not to kill connections unless we need to. But the minimum is more like the 200, compared to the 2,000.
D
Oh
sorry,
sorry
that
the
mm
is
because
we
have
the
type
the
high
and
low
water
marks
for
the
connection
manager
set
at
1500
and
2000
I.
Think
so
that
when
it
gets
up
there,
so
try
to
retain
connections
up
to
low-water
mark
and
then
when
it
gets
above
2000,
you
will
bring
them
off.
It's
not
a
good
number
for
desktop
use
and
if
you
download
and
install
I
bet,
if
s
desktop,
it
actually
comes
with
better
connection
manager.
Defaults
for
for
business.
A: That uses more resources, but on my own computer — I think the default... the default of go-ipfs is not 2,000; the actual default is six hundred low and nine hundred high. My node — well, actually, mine is set at seven thousand: seven thousand two hundred as the high and seven thousand as the low. This is my laptop, and it's actually not a problem in practice, just because I'm not running a DHT server; it's running the DHT in client mode.
G: For these Hydras — they're taking in DHT records, you know. I've talked with you guys a little in the past about how we just have too many provider records to announce in one cycle, before that twelve hours is up. Will this still provide performance benefits even if the records aren't all announced — we have nodes that are willing to serve them, people just don't know about them yet?
B
It
will
provide
enormous
benefits,
but
it's
just
like
because
these
nodes
will
be
highly
connected.
Highly
available.
Public
IP,
like
fast
machines
right
to
your
nodes,
will
be
able
to
like
dial
like
one
day
that,
when
I'd
rather
will
like
have
a
fast
response
cycle.
But
but
that's
like
the
optimization
you
get
like
you,
don't
get
like
a
smart
optimizations.
That
jazz
here
is
a
bucket
of
records
now
distributed
yourself,
which
will
be
just
like
a
more
efficient
way
to
build
records,
especially
for
our
devices
like
resource
device
with
OS
resources
and
so
on.
B
We
we
have
been
discussing
like
ways
to
make
provides
faster,
yeah
like
there
is
no
specific
design
details
on
the
Hydra
to
provide
that.
Yet
it
might
you
might
just
come
down
like
the
simplest
thing
to
do
is
to
have
an
option
where,
when
an
activist
note
notices
that
the
user
is
adding
a
lot
of
data
to
just
like
bring
up
the
hey
like
it
seemed
like
you're,
letting
a
lot
of
data
and
you're
going
to
provide
a
lot
of
records.
B: Another thing — okay, so that was one thing — another way the Hydras can also help you do the provides is that, because they are so well connected, when you do the find-peers call to find the best place for the provider records, you will actually need to do fewer hops in the network to find the final destination for the records. So you can actually get another performance optimization there. So there are the two of them. Anything else? I think there is a hand from Marilyn — go ahead.
B: Yeah — for example, I guess for your use case, Matt, what you might be thinking of is running your own Hydra and having your pinning servers point to the Hydra nodes as the place to put the records when you add or update data, and then announcing across the network as usual, because then, when your nodes are looking for those records, they will find the Hydras, and through them the nodes that are pinning your data.
A: Basically, the main thing here is that Hydras help prop up the network when you have problems with the DHT. Part of the goal for 0.5 is also to reduce those problems in the DHT, and all of that together should just reduce how long it takes to actually provide these records — we should get faster at providing everything. The next step there is parallel provides.
A: And then there's batch provides, which are entirely possible and which we've considered doing, where, instead of doing a normal DHT query for each record, you basically sort the records by where they would go in the DHT and just walk around the DHT, giving batched updates to every single node. Basically, that's for when you have more records than there are nodes in the DHT.
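The batch idea can be sketched as: map each record key to its DHT location and sort, so one ordered walk of the keyspace visits each responsible node once, instead of running an independent query per record. A toy sketch — the CID strings are hypothetical placeholders:

```python
import hashlib

def dht_location(key):
    # Records live at the SHA-256 of their key in the Kademlia keyspace.
    return int.from_bytes(hashlib.sha256(key.encode()).digest(), "big")

def batch_provide_order(cids):
    # Sort records by keyspace position; providing them in this order lets
    # a single walk around the DHT hand each node all of the records it is
    # responsible for, rather than one full lookup per record.
    return sorted(cids, key=dht_location)

cids = ["bafy-a", "bafy-b", "bafy-c", "bafy-d"]
ordered = batch_provide_order(cids)
locations = [dht_location(c) for c in ordered]
assert locations == sorted(locations)  # the walk moves monotonically
print(ordered)
```

As the speaker notes, this only pays off when the number of records exceeds the number of nodes, so that consecutive records in the sorted order usually land on the same node.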