From YouTube: 🚀IPFS Core Implementations 2020-04-06 🛰
Description
Meeting notes: https://github.com/ipfs/team-mgmt/issues/992
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
A
A
B
It is in the works. We are looking to launch that on Tuesday, so we're just doing a bunch of preparation for that: testing, release prep, release announcements, migration notes, and all of that good stuff is in the works this week. So we're working on getting that out for next Tuesday, not tomorrow.
B
For that, as I mentioned, that's 0.5, so we're prepping for that, and then we are also formally kicking off the work on 0.6 tomorrow, so we'll start working on that. Any patches for 0.5 will definitely take priority, but we're going to start working on 0.6 and we'll start rolling with our six-week release cycle. That's what we're going to be working towards: six-week releases, and then we'll just be cutting stuff so that people get things sooner.
B
A
C
Yeah, I can take that. So the Hydra Booster, sorry, it's been a while since I've been here, so I'm just putting everything in here, including things I know haven't been announced yet: the Hydra Booster now proactively finds providers it's asked about. Being a DHT participant, it gets asked "who has this?" all the time, and so what it does is, if it doesn't know the answer, it goes and looks it up and then stores it, so that the next time it's asked, it knows. So that's super cool.
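In Go terms, that look-up-on-miss behaviour amounts to roughly the following sketch, assuming a hypothetical providerStore cache and a DHT client shaped like go-libp2p-kad-dht's FindProvidersAsync; it is an illustration, not the actual Hydra Booster code:

```go
package hydra

import (
	"context"
	"time"

	"github.com/ipfs/go-cid"
	"github.com/libp2p/go-libp2p-core/peer"
)

// providerStore and dhtNode are hypothetical stand-ins for Hydra's local
// provider-record cache and its DHT client.
type providerStore interface {
	Providers(c cid.Cid) []peer.AddrInfo
	AddProvider(c cid.Cid, p peer.AddrInfo)
}

type dhtNode interface {
	FindProvidersAsync(ctx context.Context, c cid.Cid, count int) <-chan peer.AddrInfo
}

// answerGetProviders answers from the cache when it can; otherwise it looks
// the record up in the wider DHT in the background and stores what it finds,
// so the next time the same CID is asked about, it knows.
func answerGetProviders(store providerStore, dht dhtNode, c cid.Cid) []peer.AddrInfo {
	if known := store.Providers(c); len(known) > 0 {
		return known // we already knew the answer
	}
	go func() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		for p := range dht.FindProvidersAsync(ctx, c, 20) {
			store.AddProvider(c, p) // remember it for the next ask
		}
	}()
	return nil // nothing to return this time around
}
```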
C
The peer IDs: hydras have many, many, many heads. Peter added balanced peer IDs. When we generate them, peer IDs are generated randomly, but what we don't want to happen is for those peer IDs to end up next to each other in the same bucket, sort of thing. So they're actually generated so that they're balanced in the tree, and that's great, but it was only happening per Hydra, and we run multiple hydras because of the limitations of machines.
C
You can only run, say, 200 or so heads per Hydra before things start falling over. So balancing them on one Hydra is great, but you could have overlap with another Hydra's heads. So now what we've done is we have one Hydra ID-gen server, and the other hydras in the Hydra swarm ask that ID-gen server to generate a peer ID. So all of the peer IDs that are generated are balanced across all hydras.
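Roughly, each head asks the central server for its identity rather than generating one locally; the /idgen/add endpoint and the base64-encoded key response below are assumptions for illustration, not the real Hydra Booster API:

```go
package hydra

import (
	"context"
	"encoding/base64"
	"fmt"
	"io"
	"net/http"

	"github.com/libp2p/go-libp2p-core/crypto"
)

// requestBalancedIdentity asks a central ID-gen server for a private key
// instead of generating one locally, so the resulting peer IDs stay balanced
// across the whole swarm rather than just within one Hydra.
func requestBalancedIdentity(ctx context.Context, server string) (crypto.PrivKey, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, server+"/idgen/add", nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("idgen server returned %s", resp.Status)
	}
	encoded, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	raw, err := base64.StdEncoding.DecodeString(string(encoded))
	if err != nil {
		return nil, err
	}
	// Assumed response format: a marshalled libp2p private key.
	return crypto.UnmarshalPrivateKey(raw)
}
```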
C
So that is good news. The other stuff is that a big part of Hydra is that there's a big belly where all of the data is stored, all of the provider records, and all of the Hydra heads share that data, and that's good within one Hydra. But if we've got multiple hydras, then one Hydra might be asked for something that is in another Hydra's belly. So why not have one belly, one belly to rule all the hydras?
C
That's just kind of weird, you know what I mean. So what we did is have this one store with all of the data in it that all of the hydras can get data from, and we've been working on getting that. There's an old SQL datastore for go-ipfs that Jeremy created a long time ago that I've been fixing up. It now passes all the go-datastore tests, and it now streams results instead of buffering them all into memory.
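The streaming part boils down to handing rows to the consumer as they are read, rather than collecting them all first; a minimal sketch with database/sql and go-datastore's query helpers, using a made-up blocks(key, data) table:

```go
package sqlds

import (
	"database/sql"

	dsq "github.com/ipfs/go-datastore/query"
)

// queryStreaming returns results one row at a time instead of reading the
// whole result set into memory first. The table and column names are
// placeholders; the real datastore also applies filters, orders and limits.
func queryStreaming(db *sql.DB, q dsq.Query) (dsq.Results, error) {
	rows, err := db.Query("SELECT key, data FROM blocks")
	if err != nil {
		return nil, err
	}
	out := make(chan dsq.Result)
	go func() {
		defer close(out)
		defer rows.Close()
		for rows.Next() {
			var key string
			var data []byte
			if err := rows.Scan(&key, &data); err != nil {
				out <- dsq.Result{Error: err}
				return
			}
			// Each entry is handed to the consumer as soon as it is read.
			out <- dsq.Result{Entry: dsq.Entry{Key: key, Value: data}}
		}
	}()
	return dsq.ResultsWithChan(q, out), nil
}
```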
C
So that's good. I then tried to use it and ran into a number of other problems, like segfaults and deadlocks, so I'm currently working through solving those, but once that's done we'll have a persistent datastore that is shared by all hydras. So that's super good news. The final thing that happened was that we moved.
C
We were hosting it on Google Cloud, but it turns out sending data across the Internet costs a lot of money on Google Cloud, who knew. So the change is we moved them to DigitalOcean, and that's made a whole lot easier now because the Kubernetes configs are saved in the repo, so we can just put it here, or put it somewhere else if we don't like DigitalOcean, put it wherever, you know.
C
Whatever needs to happen there. Next up, I wanted to improve the metrics for finding providers. You can see a picture there of what we now have: we're able to know how long it's taking us to find provider records in the network. So we can look at stuff that we already have, stuff that we found in the network, stuff that we don't have and isn't found in the network, and then the other one, the orange one you can see in that screenshot, is this:
C
We get asked at the same time for the same CID, like "who has this CID?", lots and lots of times. So we discard some requests for content; we're doing a lot of discarding. We need to figure out a little bit more why we're having to do that and what could be done about it. So we've got some good metrics.
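With the Prometheus Go client, that kind of breakdown could be recorded as a lookup-duration histogram labelled by outcome; the metric and label names below are made up for illustration, not the Hydra Booster's actual metrics:

```go
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// findProviderSeconds records how long provider lookups take, split by
// outcome: "local" (already had it), "found" (found in the network),
// "not_found", or "discarded" (duplicate request we dropped).
var findProviderSeconds = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "hydra_find_provider_duration_seconds",
		Help:    "Time taken to answer a find-providers request, by outcome.",
		Buckets: prometheus.DefBuckets,
	},
	[]string{"outcome"},
)

func init() {
	prometheus.MustRegister(findProviderSeconds)
}

// observeFindProvider is called once per lookup with its outcome.
func observeFindProvider(outcome string, took time.Duration) {
	findProviderSeconds.WithLabelValues(outcome).Observe(took.Seconds())
}
```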
D
So when you set up a subdomain gateway, that subdomain gateway will still accept requests for the old-school paths. So if you ask a subdomain gateway for /ipfs/something, it will return a redirect to the proper resource on the subdomain, and we did that for files. The problem was that directory listings were created by different code, and that was not redirected. So it stayed on the path until you clicked on a file, and then you got redirected to a subdomain. Now we've fixed that, and everything from the get-go gets redirected to the subdomain.
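The redirect behaviour amounts to something like this simplified net/http sketch (not the actual go-ipfs gateway code; scheme and host handling are stripped down to the happy path):

```go
package gateway

import (
	"net/http"
	"strings"
)

// redirectToSubdomain sends path-style requests like
// http://gateway.example/ipfs/<cid>/dir/ to the subdomain form
// http://<cid>.ipfs.gateway.example/dir/ — for files and for directory
// listings alike, which is what the fix added.
func redirectToSubdomain(w http.ResponseWriter, r *http.Request) bool {
	parts := strings.SplitN(strings.TrimPrefix(r.URL.Path, "/"), "/", 3)
	if len(parts) < 2 || (parts[0] != "ipfs" && parts[0] != "ipns") {
		return false // not an old-school path, nothing to do
	}
	ns, root := parts[0], parts[1]
	rest := "/"
	if len(parts) == 3 {
		rest += parts[2]
	}
	target := "http://" + root + "." + ns + "." + r.Host + rest
	http.Redirect(w, r, target, http.StatusMovedPermanently)
	return true
}
```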
D
It was not really a security problem, because we control the HTML responsible for directory listings; however, it looked weird, so that's fixed. I'm not sure if it's in our RC2 or if it will be in the next one, but that's the main fix. And something that already landed, but I did not mention before: we sneaked it into 0.5, but I don't believe it was described anywhere outside of the PR, and I realized:
D
A very important part of the CIDv1 migration was supporting case-insensitive IPNS names, and now we support that both in the gateway and also on the command line. On the command line, when you resolve the old-school identifiers, those are base58 and case-sensitive, and they can point at arbitrary content paths. However, we are not able to put those in a subdomain, so we support representing peer IDs as CIDv1, so we can put them in subdomains.
D
The problem is it's using a different multicodec, so it's self-describing that it's not a content identifier from regular IPFS; it's representing a libp2p key. So what happens when you use a CIDv1 with a different multicodec, in this case dag-pb, which is what you will get when you convert the old-school thing to the new thing using the standard command?
D
You will get this useful error saying: hey, this peer ID is represented as CIDv1, but it has the wrong multicodec, and in the same message we have already done the proper conversion and it's ready for you. So if you retry with that one, you will properly resolve the IPNS name. You can see, well, you probably won't see this, but the only difference is that here this is dag-pb and here it's libp2p-key. I think it's a very useful thing to know that we handle this on the command line.
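The conversion the error message hands back boils down to re-wrapping the same multihash with the libp2p-key multicodec; roughly, with go-cid (a sketch, not the exact go-ipfs code):

```go
package ipnscid

import (
	"github.com/ipfs/go-cid"
)

// fixIPNSMulticodec takes a CIDv1 that was meant to name an IPNS key but
// carries another multicodec such as dag-pb, and returns the same multihash
// wrapped with the libp2p-key multicodec, which is what IPNS names use.
func fixIPNSMulticodec(s string) (string, error) {
	c, err := cid.Decode(s)
	if err != nil {
		return "", err
	}
	if c.Type() == cid.Libp2pKey {
		return c.String(), nil // already correct
	}
	fixed := cid.NewCidV1(cid.Libp2pKey, c.Hash())
	return fixed.String(), nil
}
```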
D
There is this useful error message, and on the gateway you don't even notice, because when we redirect to a subdomain, the multicodec is automatically fixed up. So I think it's useful to know that it is there: you can use not only IPFS in subdomains, but also IPNS identifiers. Next would be adding support for subdomains to js-ipfs.
D
C
D
The problem is, when you pass a CID to that command, like ipfs cid base32, right, when you pass a bare CID there's no way to tell: if it's dag-pb, is that an error, or is that what I really want? You would have to pass the full path, like /ipns/ and then the CID, but that's not what most people do; they just copy the CID. So that's why we provide this useful error message.
D
So it's not a big blocker, and people immediately learn that, oh, there's this multicodec; if I want to do this more often I need to account for that in my workflow. There is also, in the ipfs cid command itself, the ability to print a CIDv0 as CIDv1 and manually specify the multicodec. So that's already in the command; it will already ship with that, so I don't think anything else is needed.
C
D
So I think that's a decision we need to make around the moment we flip the switch for ipfs, right. If we change the output of all the command-line tools to base32, we may do the same for peer IDs, just for the sake of having a uniform representation everywhere, but that's probably for 0.6 or further out.
F
F
There was a related problem with how we were managing connections, so we've just done a refactor to move that into the networking layer, which simplifies the rest of the code. I've got a little bit more work to do on connection management; hopefully that'll be finished by today.
G
The last piece remaining was to put together the ability to run multiple chunkers at the same time, one alongside the other, without penalizing anything, so each runs in its own goroutine; but because we're dealing with a stream, the results have to be reassembled correctly in order afterwards, and that took a very arduous rewrite of how all of this is put together. That is actually done, it has been done as of, seems like, late yesterday. Unfortunately, I have failing tests that only fail under go test.
G
It works perfectly if I run the very same thing on the command line; the command line just does everything I expect. The moment you run go test on the tests I had written before, it blows up with actual memory errors and stuff like that, so I am tracking this down as fast as I can. Unfortunately, none of the Go race tools help instantly.
G
They aren't useful here because the moment you slow down your execution, the problem goes away. So I'm basically going through the code and commenting out section after section until I find it. And I am actually at the spot where I would want a couple of people interested to attend, basically, kind of a code review session, before we open it up to the wider community.
G
I am hoping to schedule this for tomorrow. Please leave a note in the CryptPad if you want to attend, and I'll figure out how to schedule it around that. It will be like half an hour, just showing you what it does, a little bit of background on why we are where we are, and then any input you have on what is weird or what is awesome. And that's that.
H
I
Yeah, I will be fairly quick. We finished grant phase one, and if you look in the CryptPad you can see the full roster of HTTP API endpoints that we support now, in addition to just the underlying Rust APIs that perform that functionality. We also got the ipfs name on crates.io, so soon you'll be able to include that in your Rust project, or cargo install ipfs, which is pretty neat.
I
Our intent is to apply for another grant round, to get more functionality in and move towards basic UnixFS and gateway functionality as well, with the idea that we can launch the first Rust gateway or something. A cool note is that Yonas from the team figured out a way to bring down the compile time, which was becoming pretty long, so that was cool. And then for requests, I just have the list in the CryptPad of the js-ipfs pull requests that need, I guess, attention or whatever.
A
Well, I will get to some of those PRs, something I've done in order to help you guys out as well. We've also had a refactor on the cards for the UnixFS importer, because it kind of uses the internal IPLD instance, so I changed that, like, today, to only use the block API, the public API. That's super helpful, yeah.
A
Yes, you should be able to use that to experiment with all kinds of weird data structures, and it helps us too, because we used to double-serialize blocks because of a quirk of how my code was structured, and that doesn't happen anymore. So it's a little bit faster, and you can play with it; it was waiting for review.
A
J
Everyone, so on the peerstore improvements: milestone one, with the address book and the proto book, is now closed. The PR was merged and landed in the 0.25 release branch, which is cool. The second milestone, with the PeerInfo removal from all the codebase and the API, is in good shape now; all the needed PRs are ready for review. Jacob already reviewed and merged the PRs for the interface, so the other components will go for review now.
J
And finally, this week I will also start working on the implementation of milestone three, which will include peerstore persistence for the address book and the proto book. Then, in the future, milestone four will integrate the key book with the persistence as well, for the keys, and that's it.
A
So, cancellable requests in js-ipfs: I'm working on that myself at the moment; we've got PRs up at the moment. Well, there have been merges: the importer is wired into the block service, and that's basically just passing things through, so we're using them. Like was said, Go has this idea of the context, and the context has a deadline and you can cancel it explicitly. So what we have is, Jeremy has added AbortControllers, which have an abort signal.
A
Not every API request will respond to it, because some are synchronous and some just don't have anything to cancel. The first thing that I want to get to cancel is: if you're requesting a CID that you don't have in your repo, it gets added to the wantlist, and if you cancel that request, it should vanish from the bitswap wantlist. That's the first thing.
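In Go terms, that is the pattern being mirrored: the request carries a context, and cancelling it cleans up the pending want. A toy sketch with a stand-in wantlist, not js-ipfs or go-bitswap code:

```go
package wantdemo

import (
	"context"

	"github.com/ipfs/go-cid"
)

// wantlist is a stand-in for bitswap's list of blocks we are asking the
// network for.
type wantlist interface {
	Add(c cid.Cid)
	Remove(c cid.Cid)
}

// getBlock registers the want, then waits for either the block to arrive
// or the caller to cancel. On cancellation the want vanishes again.
func getBlock(ctx context.Context, wl wantlist, c cid.Cid, arrived <-chan []byte) ([]byte, error) {
	wl.Add(c)
	select {
	case data := <-arrived:
		return data, nil
	case <-ctx.Done():
		wl.Remove(c) // cancelled: drop it from the wantlist
		return nil, ctx.Err()
	}
}
```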
A
The other thing is, you're opening connections to other peers, and if the request gets cancelled, those connections get torn down. Once those two things are in, that's the core of it; but first, when I'm done, we'll ship it, and it will be super exciting to be able to cancel requests, or mostly.
E
Hearing your explanation, I'm not sure if tearing down all the connections that were already open is a good strategy, just because it causes a lot of churn. You probably want to cancel all of the asks to open more connections, so anything that is in the queue to dial, just cancel those; but the ones that you have already opened, that are part of the routing table, or where you have already exchanged other protocols, just keep them around, because that's something that you already have.
E
It might be the case that you had to close previous connections to open those, right? And so if you now close the new ones, then you will run out of connections; you could just keep the new ones, you know, and call that the new state. So this is my observation; maybe I'm wrong, maybe I'm missing something.
E
A
It's worth looking into, yeah, what's achievable, or whether that's the right strategy, because the concern is: if you have somebody hammering the API with a bunch of nonsense requests, that's where these things can happen. You can put some rate limiting in front of it, and you might want to cancel some of those requests that come in over the limit.
E
Cancelling a bunch of the things, like taking it out of the wantlist, or rate limiting the API requests, that's fine. It's just that opening a connection is such an expensive thing that you don't want to just close the ones that you already have open; maybe let the connection manager decide that. That's what I'm saying.
A
A
But the place where that doesn't work is over HTTP, because you'd want to add one block at a time and send back the first result, but you have to read all of the blocks before you can send any results, because HTTP is a request and then a response. You can't have bi-directional communication there; for that we could use WebSockets and such, but if we don't do that, it just means plain HTTP.
A
So I had this idea of: what if you could just do all that parallelization on the client and send loads of requests in parallel to the server? Because the recent refactoring of the UnixFS importer exposes the block API, suddenly you can do all the chunking on the client and call the block API loads of times from the client, which in theory would only be a little costly. Turns out it's actually slower, like two times slower, so yeah, that was a dead end.
A
H
H
This is how I do everything in Go, where you send all your data streaming and you receive the data streaming, and then the Go HTTP server does not like this. Actually, we can work on it.