From YouTube: 🖧 IPLD Bi-Weekly Sync 🙌🏽 2018-11-12
Description
A bi-weekly meeting to sync up on all IPLD related topics. It's open for everyone and recorded. https://github.com/ipfs/team-mgmt/issues/720
A
Yeah, it's on my — I can do two things at the same time. So the head is here. Okay, now — all right, thanks Alan. All right, then the next thing is: please put your name on the attendance list if you are attending. And then, really quick, I want to know what this call should be about and what its format is. Should it be just like the sync meetings we do for Go and for the JS world, or something else? I would propose that we basically also take the time to discuss things, and I really want to keep the round of updates really short — so I'd rather call it an awareness update than a status update. Really, really quick, so people know: okay, I'm working on graph sync, I'm working on formats, or I'm working on something else. I don't really care about which exact PRs — just something at that level.
C
So yeah, I mean, from my perspective there's not so much going on right now that you can't be up to date. In the other calls there's so much stuff going on that people really depend on others updating them on what's happening, so we can definitely cut a lot of that out. I'm more interested in threads: we have a lot of open threads that don't seem to be getting closed out.
A
Give me these updates, then — or any other agenda items that come up from the community. All right. What I also have on my agenda: we still have this weird thing that we're kind of IPFS but also IPLD. So should we still put our notes where we have the Go notes and the JS notes, or should we put them somewhere separate, in an IPLD one?
A
What else? Yeah, then the biggest item I have is probably to discuss graph sync. We have three proposals, and we need to get to some agreement on how we move on and so on. I'm not sure if we should discuss this first, because it probably takes the whole meeting anyway, even if we start in ten minutes. So let me see what other items we have.
D
There's actually two things. One is: some months ago I submitted to Rebooting the Web of Trust in Toronto a paper about using IPLD as a generalized framework for DID methods. DIDs are decentralized identifiers for self-sovereign digital identity, and there are some security vulnerabilities in the current approach with JSON-LD, and I thought IPLD would be a much more secure approach. So I presented that along with Christian Lundkvist from uPort.
D
We wrote it up, and there are still some shortcomings — mostly because a lot of this work is coming out of the W3C credentials working group, and they're very much focused on the HTTP protocol. Everything in their viewpoint is a URL, and so I presented a much more secure framework. Just for information: I'm still battling the credentials working group to include some syntax that would be more secure, including reserving the slash for IPLD links. So — just read the paper.
D
I'd just really appreciate your feedback, mostly because I think the current approach — and I actually talked to Juan about this back at the DWeb Summit — is that a lot of this JSON-LD syntax is just broken. I think it really underlines why we need multikey, and multikey is very cumbersome for representing the X and the Y for the elliptic curves. I've actually been more of a proponent of the alternatives rather than JOSE, the JavaScript Object Signing and Encryption.
C
That's awesome. A couple of quick questions about that. One is: would this actually change the DID? Inside the DID it references a particular public key. Would we end up referencing a CID to a data structure instead of a public key? Or would it still be a public key, but then when you go and look up information about it, you would end up getting IPLD instead of JSON-LD — because right now those are really separated in this spec? — Yep.
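To make the two options in that question concrete, here is a minimal sketch in Python. A hash of the serialized object stands in for a real CID, and the field names (`publicKey`, `publicKeyBase58`) plus the `fake_cid` helper are illustrative assumptions, not the actual DID or IPLD specifications.

```python
import hashlib
import json

def fake_cid(obj) -> str:
    # Stand-in for a real CID: hash of the canonically serialized object.
    data = json.dumps(obj, sort_keys=True).encode()
    return "bafy-" + hashlib.sha256(data).hexdigest()[:16]

# Option 1: key material inlined, JSON-LD style.
did_doc_inline = {
    "id": "did:example:123",
    "publicKey": [{"id": "did:example:123#key-1",
                   "type": "Ed25519VerificationKey",
                   "publicKeyBase58": "H3C2AVvL"}],
}

# Option 2: the DID document links to the key object by CID,
# so looking it up resolves an IPLD block instead of JSON-LD.
key_obj = {"type": "Ed25519VerificationKey",
           "publicKeyBase58": "H3C2AVvL"}
did_doc_linked = {
    "id": "did:example:123",
    "publicKey": [{"/": fake_cid(key_obj)}],  # IPLD-style link object
}
```

In the linked variant the key material is content-addressed, so the document is tamper-evident by construction, which is the security property being argued for above.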
C
Okay, cool — very cool. And since you're coming to multikey: just so you know, we're looking at a bunch of stuff that needs to happen in multiformats. It's not in the best state. We want to get some of the stuff that we feel is pretty much ready to go, like multihash, onto a standards track, and then have a process set up whereby things can actually be moved onto the standards track.
D
Yeah, and actually we're giving that a lot of thought, and I think it really is going towards the JOSE framework for representing the curves. Unfortunately, some of the newer Poly1305 and ChaCha algorithms are actually not standardized yet, so they're really at the edge — but those are actually what a lot of the zero-knowledge proof work is leveraging.
D
Those are actually not in any IETF document yet, so that's the cutting edge — and that's actually where I think a lot of the work needs to be: standardized approaches and a standardized vocabulary. My point about actually using IPLD is that we could have self-resolving keys. Say you've created a key using Poly1305 with the ChaCha algorithm — well, here's what I mean by that. A lot of it is namespace registration: say you have, whatever, the sha2-256 algorithm.
D
Well, what do you mean by that? Who says so? What is that sha2-256? Where is that registered, and who agreed on it? And that gets me into sort of the next topic. I apologize — I've been off the radar and not responding to my GitHub mentions, Steven. So, as far as a typing system with IPLD: the challenge is that a lot of the RDF models, with the Semantic Web, ultimately have a simple rooting issue.
D
Ultimately, everything in the RDF world is rooted in a URL, and so you ultimately need a more robust representation of the types. I think you can have simple or generic data types, but as the data types get more complicated you actually have to have some lightweight ontology, or linked data.
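The "self-resolving keys" and namespace-registration idea raised above can be sketched as a multicodec-style table: each algorithm name maps to a registered prefix byte, so a key or hash carries its own description instead of relying on out-of-band agreement about what "sha2-256" means. The code values here are made up for illustration; they are not the real multicodec registrations.

```python
# Hypothetical prefix table; the real multicodec codes differ.
CODEC_TABLE = {
    0x12: "sha2-256",
    0xed: "ed25519-pub",
    0xa0: "chacha20-poly1305-key",
}
NAME_TO_CODE = {name: code for code, name in CODEC_TABLE.items()}

def tag(name: str, payload: bytes) -> bytes:
    # Prepend the registered one-byte code so the bytes are self-describing.
    return bytes([NAME_TO_CODE[name]]) + payload

def describe(data: bytes):
    # Recover the algorithm name and the raw payload from tagged bytes.
    return CODEC_TABLE[data[0]], data[1:]

key = tag("ed25519-pub", b"\x01" * 32)
name, raw = describe(key)
```

The point is that the answer to "who says what sha2-256 is?" becomes "the registry this table represents", and anything carrying the prefix resolves itself.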
D
I just put it as another link in the notes under the next topic. You might not be able to resolve it, because I'm on terrible, terrible internet access — I just moved to Chicago and I have like three megabits down and half a megabit up — and I just stored it on IPFS, so it might not work too well.
A
All right, thank you, Johnny. So I guess we can wrap that up, and then we can go to the graph sync stuff, I guess.
C
These are just sort of — if we first said: look, let's lay down the framework for doing a more RPC-based approach, and then on top of that start layering in certain RPC methods to enable these different replication schemes — that would give us a much better path to move forward, in a much more modular way, than trying to have one entire proposal covering not just the interface but all the infrastructural changes needed to make it happen.
C
We can hopefully start to simulate some of these replication use cases under different network conditions and with different types of data. I have some code in a PR to try and create a simulator for that. But I don't think we should block any work until we have all of this simulation logic up and know exactly which use cases each replicator solves better — we can start implementing now by moving forward with the common APIs and then implementing the basics.
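The RPC-style layering being proposed might look like the following sketch: a small registry of named methods taking options, so new replication methods can be layered in later without changing the framework. The method names are assumptions for illustration, not an agreed GraphSync API.

```python
from typing import Callable, Dict

class RPCRegistry:
    """Named methods with options; new methods can be layered in later."""

    def __init__(self) -> None:
        self._methods: Dict[str, Callable] = {}

    def register(self, name: str, fn: Callable) -> None:
        self._methods[name] = fn

    def supported(self):
        # The set of method names doubles as a capability list.
        return sorted(self._methods)

    def call(self, name: str, **options):
        if name not in self._methods:
            raise KeyError(f"unsupported method: {name}")
        return self._methods[name](**options)

rpc = RPCRegistry()
rpc.register("get_block", lambda cid: f"<block {cid}>")
rpc.register("get_blocks", lambda cids: [f"<block {c}>" for c in cids])
```

Adding a new replication scheme is then just another `register` call, which is the modularity argument made above.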
E
I'd like a very clear distinction between the network protocol stuff and the user-facing APIs. For the user-facing APIs, getting a Merkle path back may be important, but it's not necessarily critical; the network stuff is the critical part. And things like get-many don't matter there so much — they're different.
C
Yeah — well, we're going to need an API for getting a block, and then we probably want an API for getting many blocks, or we can just parallelize that one get-block call. We can sort of see which network conditions play out better for each of those, but I think if we have a get-many, we'll just use that. We also need to figure out what happens when somebody doesn't have a block.
C
How do they send that rejection? That's an issue in every single one of these specs that we haven't figured out, so these are all commonalities. And then, yeah, on the user-facing side: I definitely want an API that says "pin this IPLD selector" — I really want that. So we're going to need a parser for selectors no matter what, so we should, in parallel, just work on the selector spec, get it properly defined, and get implementations going.
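A selector parser, at its simplest, could be a path selector: parse a slash-separated path and walk a node graph with it, returning every node along the way. This is a toy sketch of that idea — the real IPLD selector spec is much richer — assuming nodes are plain nested maps and lists.

```python
def parse_selector(path: str):
    # "a/b/0" -> ["a", "b", 0]; numeric segments index into lists.
    segs = [s for s in path.split("/") if s]
    return [int(s) if s.isdigit() else s for s in segs]

def select(node, selector):
    # Walk the node, collecting every intermediate node plus the target,
    # i.e. the "path with all the nodes up to there" shape.
    visited = [node]
    for seg in selector:
        node = node[seg]
        visited.append(node)
    return visited

graph = {"a": {"b": [{"name": "leaf"}]}}
nodes = select(graph, parse_selector("a/b/0/name"))
```

Returning the intermediate nodes is what makes the same walk usable both for "pin everything under this selector" and for Merkle-proof-style responses.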
E
Request/response as perceived may actually not be the best model for this, because, at least for GraphSync B, you'd be getting back a stream of blocks. You get back a bag of data for a request, and you may actually want to cancel the request and not consume what has already been received. So it's not quite "I send a single request and I get a response"; it's more of a streaming model.
E
I send a request for one or more nodes — and these requests may even share the same ID — and then from one or more nodes I get one or more objects back. Often you don't want to block on such a request, because if you do, you have to commit local resources and you end up with a blocking call for every single request.
C
Yeah, yeah — I guess by RPC I didn't mean a really stringent request/response model. I mainly meant that we need named methods, with options that get sent along, and we need a model for adding more of those in the future. And then, yeah, we should probably be as agnostic as possible about what the response stream looks like.
E
One thing we've also been talking about is a new multistream protocol, and an easy thing that came up there was that our protocols are actually more like endpoints than protocols. So one potential way of doing this is that you simply have multiple endpoints — one endpoint for every single method — which gives you a very easy way to tell the other side which methods you support. A GraphSync protocol could be like /graphsync/get-block, /graphsync/get-blocks, /graphsync/get-link, slash whatever.
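The endpoint-per-method idea can be sketched as a protocol table keyed by path, where listing the keys is how one side advertises which methods it supports. The endpoint strings follow the examples given in the discussion; the handler shapes are illustrative assumptions.

```python
# One endpoint per method; the keys double as the capability advertisement.
ENDPOINTS = {
    "/graphsync/get-block":  lambda req: {"block": req["cid"]},
    "/graphsync/get-blocks": lambda req: {"blocks": req["cids"]},
    "/graphsync/get-link":   lambda req: {"link": req["path"]},
}

def advertise():
    # Tell the other side which methods we support.
    return sorted(ENDPOINTS)

def dispatch(endpoint: str, request: dict):
    # Route a request to the handler registered for its endpoint.
    return ENDPOINTS[endpoint](request)
```

Negotiating each method as its own multistream endpoint means feature detection falls out of the existing protocol negotiation instead of needing a separate handshake.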
A
All right, so I think what we now have is: we've of course made a distinction between the user-facing stuff and the internal communication stuff, whatever we call it.
A
So, can we find agreement on what Michael basically said: that we certainly need "get multiple blocks" and "give me some path, with all the nodes up to there, for the Merkle proof"? Is this something we can agree on? Because this is especially what the GraphSync C proposal is about.
E
Again, I think — as I said, make the distinction I was talking about. I'm not sure we even really want a get-multiple-blocks single-message RPC the way we do it in Bitswap, with the want list. In GraphSync I assumed it would be a single message that contains multiple requests, each with a request type and an ID. So I could have a set of requests: give me this path, give me this CID, and give me another selector. So that's my sense of it.
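The single-message, multiple-request shape just described might be modeled like this: one message carrying a list of requests, each with its own ID and type (path, CID, or selector), so responses can be correlated without a blocking call per request. The type names and message layout are assumptions for illustration, not the GraphSync wire format.

```python
from dataclasses import dataclass, field
from itertools import count
from typing import List

_ids = count(1)

@dataclass
class Request:
    kind: str   # "path" | "cid" | "selector" (illustrative request types)
    arg: str
    id: int = field(default_factory=lambda: next(_ids))

@dataclass
class Message:
    requests: List[Request]

msg = Message(requests=[
    Request("path", "/a/b/c"),
    Request("cid", "QmFoo"),
    Request("selector", "a/*/name"),
])

def route(message: Message):
    # Fan the requests out by type; responses would carry the same IDs back.
    return {req.id: req.kind for req in message.requests}
```

Because each request carries an ID, the responder can stream objects back in any order and the requester can cancel individual sub-requests, matching the non-blocking model argued for earlier.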
C
If we really need get-many — I don't know what's happening with the multistream 2.0 stuff yet, but if we essentially have free substreams, I don't know if we need a get-many at all, right? We can just use the single get for that.
E
We should separate what the client sees from what we send over the network. From the client's perspective, a get-many is very useful, especially for resource tracking. If I let the client batch up all of its gets and pass them off to Bitswap, or whatever it is, all at once, then we can optimize a lot of things — for one, sending a single packet over the network.
E
Whereas if a client just fires off a bunch of gets one by one, we're probably not going to batch them properly, and we'll use a bunch of extra resources on per-request overhead. So yeah, keep them distinct: what do we actually need to happen over the network, and what do we need mostly on the client side?
C
That's what I'm saying — there's a constraint right now that makes a get-many better than sending the requests individually, and that's maybe going to be fixed by multistream 2.0 or something like that. So we can have a discussion about whether or not that's a good optimization.
E
I want to back up on the batching stuff. Actually, thinking about that again, I don't think that's going to work well in this case, because ideally we send out one request and it has several sub-requests inside it. You don't really need an extra mechanism for packing things up.
E
Yes, it simplifies things a bit — but maybe — sorry, yeah, yeah.
F
Does anyone know — say, whatever, I want a list of all the people who are in this room, and I ask some people here and I ask my peers, and they all give me different amounts and different users, and then I smash them all together and I get my final list, right? Is that supported, or not supported, or planned at some point, I guess?
C
That's not the way we've talked about it. We've mainly talked about it like: you have something in cache, which means the state was a particular CID with that whole graph under it, then the state changed, and there's a new CID now with the changes in it. So you have the old graph in a cached state, and now you just need the new changes.
F
Maybe — well, it depends what you mean by root. Okay, let's say we have the starting state of the document, and every time you make a change on it, it's like a git tree.
F
You just point to the head, right? And so my version of the tree changes and your version of the tree changes, and now I need to synchronize them and sort of smush the two graphs together. Maybe we can pretend, for ease of use, that there's a very easy merge function: if I had a pointer and you had a pointer to the same node, then you know where it branches, right? And then the user figures it out from there. Yeah.
E
So merging, I think, is outside of the scope of Bitswap and all these protocols. What you do is: you get the roots from all of your peers, you fetch their entire graphs — all the pieces you don't have — then you run your own merge function, and then you broadcast and tell everyone the result, if that makes sense. They're separate steps: one step is pulling things in, the next step is merging, and then you rebroadcast.
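The pull-then-merge-then-rebroadcast flow just described could be sketched as below, with a trivial set union standing in for the application's own merge function; the peer and fetch interfaces are hypothetical.

```python
def sync_round(local_blocks: set, peers: dict) -> set:
    """One round: pull missing pieces from each peer, merge, return result.

    `peers` maps a peer name to that peer's full block set, standing in for
    fetching their graph from an advertised root. The merge here is set
    union; a real application would supply its own merge function, and the
    caller would rebroadcast the merged result to its peers afterwards.
    """
    merged = set(local_blocks)
    for their_blocks in peers.values():
        missing = their_blocks - merged   # only fetch pieces we don't have
        merged |= missing
    return merged

result = sync_round({"a", "b"}, {"p1": {"b", "c"}, "p2": {"d"}})
```

Keeping the merge outside the transfer protocol is the point being made: the protocol's only job is computing and moving the `missing` set efficiently.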
F
The assumption would be that, quote, this system works right now — and I have something that basically does this, but it involves the two users sending their full graphs to each other, as opposed to having some protocol for communicating and saying: hey, which chunks of this graph do I actually need to send to you? It might turn out there are only like five changes all the way down here that I need to send — I don't need to send all of them.
C
So, in the new replication repository I'm working on a simulator, so we can simulate these conditions and these types of data structures. But for now I think we should just create new issues to describe some of these cases, so we make sure we capture them and can work on them later. Because I have thought about this case, where you have essentially an append-only structure, and what ends up happening is that the root is always new and the changes happen at the end.
C
It's a very interesting, different case. We're used to looking at these graphs like: oh, the CID changed for the root of the graph, so I need to start looking through it before I know what has changed and what I already have. Whereas in certain cases you can actually predict, as the remote side is parsing through the graph, when they'll need to stop. So we can come up with really cool optimizations there.
A
I would just say that we are kind of running out of time, because I want to keep it to 30 minutes. So is the plan now basically to split these things up a bit — we work in this new replication repository, and basically split out all those parts that we want to see in graph sync into issues, and then go from there? Is this a workable approach?
C
Let's just start listing use cases, and then, as we have solutions to some of these problems, we can identify which ones they fix and which ones they don't. But saying "this is out of scope" for just the next thing we do doesn't seem like it's going to scale very well, when we know we're going to have to have multiple replication schemes. Does that make sense?
C
And if we literally just said: here's all the things that don't replicate well today with IPFS — we're going to find that some of them are because of bugs and some because of theoretical limitations in how we designed things, and there's not a good way to separate those right now.
C
Yeah, yeah — so that's why I'm building the simulator, to do exactly that; we'll have a path there. To answer the focused question: yes, we'll continue on in the replication repo. We need to start a couple of issues, including what problems particular protocols are meant to solve — rather than saying "graph sync", because that means something different to everybody. It would be nice if we just said: okay, this particular network API that we're talking about — like this one where, if you give it an IPLD selector, it just returns you stuff.
C
What is this one meant to solve? What are the use cases it is meant to solve? And then, when we propose the one that's just for the Merkle proof, we can talk about the use cases that one solves as well. Then, instead of talking about "graph sync" — which literally means a different thing to every person I've talked to — we're talking about specific APIs. Also, concurrent to all of this, we need to break the selector stuff off into its own spec; there's a thread about that.
A
Okay, so I think my takeaway is that it's a good idea: we have so many use cases in mind that you start with "here's my API" and then describe the use cases it solves, instead of the other way around. I think that's a good idea, to at least cover many use cases.
A
That's good, all right. So do we have any action items for the next two weeks for, like, graph sync? I guess I will closely work with Michael on it and, yeah, figure something out.