From YouTube: 🚀IPFS Core Implementations 2020-05-04 🛰
Description
Meeting notes: https://github.com/ipfs/team-mgmt/issues/992#issuecomment-623544245
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
A: Hello everyone, welcome to the IPFS weekly sync for Monday, the 4th of May 2020. Happy Star Wars day, everyone! We are going to go through the list of high-priority initiatives, then the low-priority initiatives, and then ask some questions and fun things like that. So the first item on our agenda is upcoming and shipped releases. I don't think there's anything to talk about here; nobody shipped anything, right?
B: Oh yeah, we shipped ipfs! It happened, it's live, it's on the interwebs, so if you don't have it, get it. And then we have a patch release slated with some improvements, some around QUIC. So if you are running experimental QUIC, you may have noticed some CPU spikes; that is being worked on, and it looks like we should hopefully have an RC out today for that, and then work on pushing it out.
B: So the more the network upgrades, the better all of that gets, so upgrade! And then this week, in addition to the bugs for the patch release, we're working on #406, which is QUIC by default, which depends on some of the issues we're working through right now with the patches, and hopefully we'll have the ed25519 key interop working this week. Thanks to Vasco, we now have that interop testing on the libp2p side of things.
B: We just need to bubble that up to ipfs. js-ipfs probably isn't going to get the ability to run those keys before ipfs 0.6 comes out, but we're at least going to make sure that interop is working. There are some issues with importing and exporting PEM files that we have to resolve for js, and then, on Bitswap interop: we've done some preliminary interop testing, but we need to get that full suite of tests all set up, so we're working through the final benchmarking there for Go. Yeah, and then over to you.
D: Yep, so from the recent go-ipfs release, we've shipped ipfs-desktop running that version, and that brings subdomain gateways on localhost to desktop users. So hopefully that update is slowly rolling out to the desktop users. I've updated the subdomain section at the new docs portal, and we are in the process of migrating dweb.link, which is our canonical public subdomain gateway. It is a subdomain gateway; however, it was implemented in nginx.
D: We are now migrating it to go-ipfs, basically removing the custom nginx handling and letting go-ipfs do its thing. Most things should work. I think the remaining task being tackled is the seamless redirect that automatically upgrades CIDs when you use the subdomain gateway versus the path gateway; that should hopefully be resolved soon. I did not start on js-ipfs yet; it's on the list, so eventually. Yeah, that's it from my end.
E: Yeah, of course. So, Hydra: this week I've spent a lot of my time debugging its effectiveness, and we were finding that the Hydra heads were very, very slow to connect to. They were transferring data really slowly, and the HTTP API was the same sort of situation, but looking at the graphs there were no CPU or memory problems, so what was going on? I exposed an HTTP API to list some peers to find out.
E: We wondered whether we just had too many connections and were super-connected to our Hydra heads, and that exposed a problem within the infrastructure setup: Kubernetes has this kind of auto-NAT thing for NodePort services whereby all of the incoming connections look like they come from local connections. That wasn't good, because our heads were thinking that their peers were local when actually they were not. So we fixed that, but that wasn't really the problem.
E: That was just something else. Eventually I figured out that the standard VMs in DigitalOcean don't have dedicated CPUs, they have shared CPUs. Digging deeper into it (I spent time looking at this with Anton as well), it looked like the network was just not processing incoming connections quickly enough and things just weren't chugging along; we had queues filling up.
E: We were tweaking the queues to make them longer, but it wasn't working, because as soon as we made the queues longer they would fill up again, so it wasn't helping. But when we switched to dedicated-CPU machines, things magically worked. It seems obvious now, but I think it was just that the shared CPUs weren't getting through the connections quickly enough for the application, so things were just filling up. So right now there are five Hydras running with 50 heads.
E: They seem pretty stable to me; they've been running for 24 hours or so, and now when I'm doing findprovs queries and such with ipfs, I'm seeing them appear in the results and being queried for information, which is super cool. The only other thing I found out this week is that we tried to make a datastore that is shared by all the Hydras, but there's a memory leak somewhere.
E: The memory just grows linearly until it runs out, and then the thing has to be restarted. So there's a problem there. It could be that there's a memory leak in the SQL datastore package, or it could be that we just have lots and lots of queries to do, so we're running out of Postgres connections and there's just a big backlog of things filling up.
E: I tried to add a connection pooler, PgBouncer, if you've heard of that, but it didn't really make any difference, so I switched back to using a LevelDB datastore temporarily, and that seems OK for now, which is good. So we've got stable Hydras running at the moment and they seem to be responding to DHT queries, which is great, so they are in a much, much better state.
F: As a brief content-routing thing, and related to Dirk's question about how much the network has upgraded: I don't know the numbers offhand, but I think the answer is "not enough". In particular, there are a lot of nodes still running things like go-ipfs 0.4.14 and 0.4.20, which is why figuring out ways to upgrade the network more smoothly, or to allow people to upgrade without leaving the others behind (while still eventually ditching them), is important.
G: Yes, so, exciting stuff this past week. As part of this research platform I'm putting together for being able to test chunking, we separated out a library that allows you to ingest a stream asynchronously and write it to a circular buffer, handling all the weird stuff: what happens when you get to the end of the buffer, what happens when you have high contention, what happens when multiple clients want to read from the buffer at the same time that the thing reading into it wants to write more stuff.
G: This library has stabilized at this point; I basically have nothing more to add or change on it. It needs its first round of documentation review, and I'd still like people to give me an API verdict: do we need to modify anything at all? The people who are kind of responsible for this are tagged on the issue in question. Because, as you probably know, it's super difficult to test asynchronous libraries like that, I have been essentially fuzzing it.
G: I've had this entire construct running for the past six days now: basically multiple versions of the same program, some with multi-threading and some without, some with the race detector and some without, some with niced priorities and some with high priority, all of them doing the same thing over and over again. This would expose various problems whenever there's a corner case and something doesn't match correctly, but for the six days nothing has failed. I will leave it spinning for another week, and then I can basically say: yeah.
G: This thing is rock solid and we can put it in go-ipfs as the usual ingestion interface. As for the actual tool itself, I am currently wrapping up writing the first new chunker. Essentially I had a similar implementation way, way back, and it's proving very interesting, because all the error checking that I added is now yelling at me that this is not correct, this is shorter, this is longer. So it's almost there, but not quite there yet, and that's pretty much all I have for now.
J: We've been in contact with all of you about that since well before the launch, so we're working through that, and then working towards the UnixFS implementation. Right now we're just looking at Rust-y ways to do protobuf encoding and decoding and things like that. So that's where we are right now.
G: It's definitely something for FUSE. It will be very useful in the coming months, and the overhead to support it is actually going to be minimal, so I very strongly recommend you look at 1.5 as it is in the specs repository, because the amount of extra work you have to do while you are in that context is minimal. [J:] Great, we definitely will.
K: Yes, so last week we aligned with Jacob about some open questions that I had on the peerstore persistence, about how we should proceed regarding the ProtoBook and other stuff. With this alignment, the persistence for the AddrBook and the ProtoBook is ready for review; Jacob already gave some input, and I also have the KeyBook implementation ready for review.
A: Cool. The next one is cancellable requests in js-ipfs, and this is me. So the PR is there, it's all done, it's just awaiting review, merge and then release. It's quite big, so it's going to take a bit of time to go through it; if anyone else wants to jump in and help, that would be appreciated.
C: Don't cancel your design reviews! Design reviews are basically a time for us to make decisions. What happens otherwise is that people make proposals, but then they get stuck; in a design review we'll just sit down and actually make a decision. So if you want that to happen, you propose a design review session: you come with a very clear agenda and a very clear decision you want to make, everyone gets together, and the design review makes a decision. Because the problem otherwise is that the discussion just goes on forever.
C: Yes, so go-ipfs is going to try out a git-flow-like release model. If you're not familiar, the way git flow usually works is you have a develop branch, while master is always stable. We don't do that: master is our development branch, but that's what all our users are used to, and it's what we're already doing; we already have a release branch, which tracks the latest release. The main difference now is that master isn't going to be blocked, even for an RC.
C: When we start the RC, we fork off to a release branch for whatever version, and then we're not going to add new features to that release branch while we continue adding new features on master. When we're ready to cut the release, we merge the RC into the release branch, tag the release there, and then merge the release branch back into master. The idea behind this flow is that we're switching to a fixed release cadence, to try to get things out faster than in the past.
C: What would happen before is we'd sit there trucking along building things, and then go: oh, we haven't cut a release in like six months, or whatever. This way, even if we don't have everything we want, we just keep saying: okay, we're going to cut a release, we let it bake for a bit, the release is done, and that keeps pushing updates to the users. But it's going to be a very fast release cadence: a six-week release cycle.
C: Each release really has nine weeks total of work in it, overlapping a bit. Because we've switched to these cadences, we don't want situations where we're in an RC two-thirds of the time; if we're not merging new features to master during that, that's going to suck. So instead we're going to be in this mode of: okay, we're constantly merging to master, but we only merge to the release branch occasionally, when we're cutting an RC.
C: You have this release branch running, and it's going to be stable. This can be tricky for us, because we have so many repos; we're not going to keep these different branches across all our different repos, and I'm not sure how to deal with that at the moment. We're always going to keep on merging, and that's going to force us to basically cut separate release branches for all these repos. I don't want to run git flow over every single thing we have. Let's do everything in lockstep as we merge pieces.
A
Come
stuff
yeah,
what
is
old
is
new
again
I
love.
It
most
I'll
point
out
we're
doing
something
quite
similar
when
today
sacrifice
and
is
working
well
going
pretty
good
release,
cadence,
quick,
no
way
before
you
go,
we
have
to
welcome
Iraqi
he's
joining
the
team.
It's
his
first
day.
Can
we
get
a
round
of
applause
in
a
big
like
hello?
Thank.