From YouTube: 🚀IPFS Core Implementations 2020-05-18 🛰
Description
Meeting notes: https://github.com/ipfs/team-mgmt/issues/992#issuecomment-630273448
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
A: Hi everyone — this is the IPFS Core Implementations weekly sync for Monday, the 18th of May 2020. I'm Brian and I'll be your host. We're going to discuss the top initiatives and high-priority issues that we're working on, then go through the low-priority initiatives, and then questions and anything else added to the table. So, first thing on the list under high-priority initiatives is a quick update on js-ipfs.

A: We shipped 0.44 this morning, which includes the first pass of cancelable requests, which is quite neat. You can set everything up with a timeout, or you can pass an abort signal down to components in the stack, which they can use — either the built-in timeout or one the user passes in — which is cool.

A: There's also a new datastore for the browser, which is pretty cool: idb, which means we lose some of the indirection of going via the datastore-level and level adapters. That's quite nice because, under the hood, webpack etc. would include those builds, and now we have control over which ones we use. So the bundle is ever so slightly smaller, which is great. Also, in future webpack won't do this automatically, so it's important to get this in. So that's cool, and that is it for js-ipfs. Yes, there's a PR — I did a release and I did a blog post on the same day, literally.
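The cancelable-requests plumbing described above can be sketched like this — a simplified illustration of the AbortSignal pattern, not the actual js-ipfs internals; `withCancel` and its option names are hypothetical:

```javascript
// Race an async operation against a caller-supplied AbortSignal and/or a
// built-in timeout, rejecting with an error as soon as either fires.
function withCancel (work, { signal, timeout } = {}) {
  return new Promise((resolve, reject) => {
    const timer = timeout !== undefined
      ? setTimeout(() => reject(new Error('TimeoutError')), timeout)
      : null
    if (signal) {
      // The same signal can be threaded further down the stack by `work`.
      signal.addEventListener('abort', () => {
        if (timer) clearTimeout(timer)
        reject(new Error('AbortError'))
      }, { once: true })
    }
    work(signal).then(
      value => { if (timer) clearTimeout(timer); resolve(value) },
      reject
    )
  })
}
```

A caller can then create an `AbortController`, pass `controller.signal` in the options, and call `controller.abort()` to cancel the in-flight request.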
B: Yeah, one was released — a small patch release that fixed some things quickly, and then a fix for the timeout bug. We've also got an issue we're looking at around canceling queries, I think — because if you can't cancel a DHT query, it might spin for a while. So.
C: Yeah, sorry — we haven't talked for two weeks, so this is a little bit long. Last time we talked I was going on about a memory leak in the new Postgres SQL datastore, but it turned out it wasn't a memory leak: the Postgres server was running at a hundred percent CPU, so queries were executing slowly, things were getting backed up, and that was bad all around. But then — could you use indexes, hash indexes instead of text indexes? I changed the indexes, and the hundred-percent-CPU issue just went away and it just started working great. My internet connection is unstable, but hopefully I can get through this quickly. There was a memory leak and that was fixed, and then I deployed again and there still seemed to be a memory leak. So that's not good. I took a pprof profile and sent it to Marten.
C: Yeah — there was an issue in quic-go which I think was resolved, so I updated to the latest version and deployed that, and then I saw that it was still being leaky. So yeah, I'm carrying on with that. And the hydras — now that we have Postgres, they all share the same datastore, and there are around 20 million provider records in there that they're serving right now. So that's super cool. It kind of goes up and down according to GC, because the records don't live forever.
C
But
it's
sort
of
around
20
million
at
the
moment,
which
is
kind
of
cool
I
scaled
them.
Up
to
5.
We've
got
5
hydras
with
100
heads
each
that's
cool
over
the
weekend,
I
switched
to
using
the
P
the
P
R
stored,
datastore
pistil
datastore.
So
it's
not
in
memory
which
is
kind
of
cool
that
removed
about
10
5
to
10
gigs
of
like
RAM
usage,
which
is
which
is
pretty
mad,
but
I
did
have
GC.
He
enabled
at
first.
C: So I think there's still some investigation to do there, but in theory we've just removed a whole chunk of RAM that we were using, which is great. Next up for me on the hydras — and I'm just working on this in my spare time at the moment, because I'm working on another project — I wanted to add five more hydras and get rid of the older DHT boosters.
D: The recent thing is the limitation on DNS names. Generally, the DNS spec has hard-coded RFC limits on how long a single label — the part between dots in a domain name — can be, and how long the entire name can be. And because we're planning to switch to those new keys, this got pulled closer on our timeline. The problem here is that if we switch to those ed25519 keys, we will run over the 63-character label limit, which is super unfortunate, because we just landed subdomain gateways — and if we switch the default keys to that new type, we will run over that limit. That means IPNS websites would have a problem, because you're not able to resolve a name that has a label over that limit.
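Some back-of-the-envelope arithmetic shows where the overrun comes from. The byte counts below are my assumptions about the layout (a CIDv1 prefix plus an identity multihash of the protobuf-encoded ed25519 public key), not figures from the meeting:

```javascript
// base32 encodes 5 bits per character, plus a 1-character multibase prefix.
function base32Chars (nBytes) {
  return 1 + Math.ceil((nBytes * 8) / 5)
}

// Assumed layout: 1 byte CID version + 1 byte codec
// + 2 bytes identity-multihash header + 36 bytes protobuf public key.
const cidBytes = 1 + 1 + 2 + 36 // 40

console.log(base32Chars(cidBytes)) // 65 — over the 63-character label limit
```

A denser multibase encoding shaves a few characters off the same payload, which is the kind of trade-off the encoding discussion later in the call touches on.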
D: So there are two issues. One is that if we switch to this new key standard, we will run over the limit with our default IPNS setup — that's problem one. And the second problem is the generic one: at any point someone can pick a longer hash, for example SHA-512, which is super long, and it will also run over that limit. And then — did you see the discussion? — it turns out even Slack has hard-coded the DNS spec into the way links are detected.
D: So you can see that even those two extra characters here were not picked up. So it's super unfortunate, and the viable solution is to just split at the limit of 63 characters, and then the remainder goes into the next sub-label. That's fine from the security perspective, because we would maximize the size of the label which is used for calculating the origin on our gateways anyway.
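The splitting workaround described above amounts to something like this — an illustrative sketch; `splitDnsLabel` is a hypothetical helper, not the actual gateway code:

```javascript
// RFC 1035 caps each DNS label at 63 octets; chop an over-long label into
// 63-character chunks and join them with dots so each chunk is its own label.
function splitDnsLabel (label, max = 63) {
  const parts = []
  for (let i = 0; i < label.length; i += max) {
    parts.push(label.slice(i, i + max))
  }
  return parts.join('.')
}
```

Joining the sub-labels back together (stripping the dots) recovers the original label, which is what a resolver would do on the other side.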
D: Are there better ways? I don't think so. There's a longer discussion — I think Stephen may want to have a design meeting, maybe later, and there's a separate section for that. This is just me highlighting that the issues are still open. We have IPNS subdomain support in MetaMask, and there's also an open PR from Infura to support wildcard subdomain gateways — for example, to allow having a subdomain per user of a gateway provider, so each user has their own origin isolation.
B: Well, there's a bunch of stuff at the bottom about all the problems and trade-offs. Yeah, one is that we could use a shorter encoding of things, but that would require making changes to libp2p, or tying peer IDs to libp2p, which would then mean it would be harder to make those changes later if we wanted to — so we're kind of stuck there. Yeah.
D: I think so — we should at least come up with some rules of thumb to collect. My personal rule of thumb: please do not compromise future optimizations — like the ten percent one at the libp2p level — for a quick UX win today. There are things like that, so I think we should have a conversation about that.
E: As we talked about, we've got a few things for go-ipfs 0.6, including just making the DHT a little more performant: reducing allocations, fixing some bugs, and improving the query times by restructuring the queries. We're going to try and land that — we spent most of last week doing some work on it.
E: Some of the results include that having a dial-back protocol would be good, and would at the very least make it easy for public gateways to get data out of nodes that are behind NATs — so that, you know, if you're running IPFS Desktop at home behind your NAT, nothing is configured, you don't have UPnP or anything like that, at least you'll still be able to get your data from the gateways.
E: It's not as good as having, say, a full hole-punching setup for letting two nodes that are both behind NATs talk to each other, but that seems like another thing to prioritize as well — things like WebRTC, maybe using TURN just as it is, even if we don't want to go the full AutoRelay route. Yeah, those sorts of things.
G: Very quick update. Two weeks ago I said something to the effect of "yeah, I need to fix this one thing and then we're ready" — five thousand lines of changes later, literally, it is ready. So everything converges with everything, and performance looks exactly where it's supposed to be. I'm right now working in parallel on writing documentation for all this and getting it out.
H: Right, yes. Basically everything is finished now: the peerstore with the keybook integration is done, and everything is merged into the 0.28 release branch. I've been discussing some stuff with Lydell regarding the metadata book — initially we didn't have it in scope for this release and these peerstore improvements, but since I was already in this context and it would be like two or three days of work, we decided to also implement it. It's already implemented and also merged into the 0.28 release.
H: So basically the next steps now are to release 0.28, which we're on — we finished everything last week. Jacob will be out this week, but we plan to release it once he comes back next week, and with that also integrate it into js-ipfs, which I'm currently working on. And that's basically it.
A: On the migration to multihash keys in the blockstore: we had a meeting about this last Tuesday, which was very active. We decided the way forward is to store blocks by multihash, and then the Rust team is going to look at storing metadata around those blocks — there's a link in the notes to the resolution, the pinning system revamp in Rust IPFS. This was kind of predicated on the outcome of the multihash-keys-in-the-blockstore discussion, which has happened, so they can move forward with this now.
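The effect of keying the blockstore by multihash can be shown with a toy store (the shapes below are hypothetical, not the actual go-ipfs or Rust IPFS types): two CIDs that differ only in version or codec share a multihash and therefore resolve to the same stored block.

```javascript
// Toy blockstore keyed by multihash rather than by the full CID.
class MultihashBlockstore {
  constructor () {
    this.blocks = new Map()
  }

  put (cid, bytes) {
    // CID version and codec are ignored for storage purposes.
    this.blocks.set(cid.multihash, bytes)
  }

  get (cid) {
    return this.blocks.get(cid.multihash)
  }
}
```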
I: So one of the things I ran into today, as I was trying to write a proposal to share with the community about sharing a node across browser tabs: the thing that came up is how we share the configuration between those nodes across the tabs. Because if you share the node, you have to share the configuration, at least the way things are now. I tried to go through all the configuration options that are at least documented and put some notes on them.
I: I think generally it might be a good idea to reduce some of the configuration, choose what makes sense for the browser, and have that out of the box. If people want to opt out of it, they can use their own thing — but if you share, you end up sharing the configuration. So I could use some help.
I: First of all, with looking at my notes — whether they make sense at all or not. And generally I think it's more of an open-ended conversation, because even though this is specific to the context I'm looking at, I think it's broader than that. If you think about native IPFS support within browsers — like what Brave is doing and Opera is doing — you can imagine there will be node configuration that every tab shares, or that every vendor does their own, because the node is part of the browser.
I: It doesn't necessarily have to be the whole node configuration — the things that you set up ahead of time — and wherever it's possible, it probably makes sense to do that there. And the other one is that some configuration seems like something the user ought to be making choices about — like which nodes it's willing to share data with, or which swarm to join — versus the embedder of the application doing so. So.
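One way to picture the split sketched above — embedder-owned browser defaults out of the box, with only user-level choices layered on top — is a simple merge. All option names here are made up for illustration, not real config keys:

```javascript
// Defaults the embedder (browser vendor or library) ships for every tab.
const browserDefaults = {
  bootstrap: ['/dns4/bootstrap.example/tcp/443/wss/p2p/QmExamplePeerId'],
  preload: false
}

// User-level choices (e.g. which swarm to join) override the defaults;
// everything else stays shared and identical across tabs.
function resolveConfig (userChoices = {}) {
  return { ...browserDefaults, ...userChoices }
}
```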