From YouTube: Libp2p Bi-Weekly Sync - September 09, 2019
Okay, cool. The pad is loading right now, so it takes a little while, but yeah, basically I finally got a bunch of things that were kind of on the standing agenda to go through last week. The visualizer project started today, and the final security audit, the one that will produce a report, starts tomorrow. And yeah, I've run through a bunch of candidates to try to onboard more people. It's pretty slow going, since you have to look at every person's GitHub and history.
Then I've been working a lot on the async/await refactor for js-libp2p, and I'll focus more on that this week. With that we're doing a couple of internal updates to the libp2p core as well. We're also adding an upgrader that we're going to pass to the transport, similar to how go-libp2p passes an upgrader for the connection. So far I'm just working on the refactor locally. It's cleaned up a lot of that logic for JS, so I think it should be friendlier to downstream users.
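The upgrader idea described above can be sketched roughly like this: the transport only produces raw connections and delegates security and stream multiplexing to an injected upgrader. This is a hypothetical Python sketch of the pattern, not js-libp2p's actual API; every name here (`Upgrader`, `TcpTransport`, `dial`) is illustrative.

```python
# Sketch of the "upgrader" pattern: the transport hands every raw
# connection to an upgrader, which secures it and then multiplexes it.
# All names and string "connections" here are illustrative stand-ins.

class Upgrader:
    def __init__(self, secure, multiplex):
        self._secure = secure        # e.g. a secio/Noise handshake step
        self._multiplex = multiplex  # e.g. an mplex/yamux wrapper step

    def upgrade(self, raw_conn):
        secured = self._secure(raw_conn)
        return self._multiplex(secured)

class TcpTransport:
    def __init__(self, upgrader):
        # The transport no longer knows security or muxing details;
        # it just delegates to the upgrader it was constructed with.
        self.upgrader = upgrader

    def dial(self, addr):
        raw = f"raw-conn-to-{addr}"
        return self.upgrader.upgrade(raw)

transport = TcpTransport(Upgrader(
    secure=lambda c: f"secured({c})",
    multiplex=lambda c: f"muxed({c})",
))
print(transport.dial("/ip4/127.0.0.1/tcp/4001"))
# → muxed(secured(raw-conn-to-/ip4/127.0.0.1/tcp/4001))
```

The benefit, as described on the call, is that the transport stays oblivious to which security protocol or multiplexer is in use.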
Alongside that, I've been investigating security options for the message-oriented transports. The ones I'm looking at are DTLS and Noise, primarily. I had a chance to look at DTLS and see how it could potentially interplay with something like this; it was kind of promising and kind of frustrating at the same time, so there'll be more on that later. We also got a wonderful new contributor in the DHT. Up next, I just want to get that surveying proposal finalized and get that first step going.
On the spec side, I did a few interviews, did a few README edits, and also dug into gossipsub. I learned new things by rereading the spec; I hadn't read it for a while, and I really want to understand it, because in a few weeks we're going to be contracting out building a gossipsub explainer, something that really goes into detail about how it works, and I want to be able to support that.

Then the next thing I would really like to focus on for the next week is working with you, and hopefully we can get a working draft up within that time range. I've also noticed that I like pulling a README off the stack every day or so and getting some edits in, because it's a nice momentum builder, so I'm going to try and keep that up.
We've already got pretty good feedback and positive signs emerging from that last weekend, where two or three, I think we're now up to four, clients are successfully interoperating via libp2p. This is awesome; I definitely expected a few more warts in the interoperability story there, but it's been more or less smooth. They did have a glitch, an issue with the way that multiselect, sorry, multistream-select was handled.
Basically, some implementations were expecting the multistream preamble plus the actual protocol selection in one shot, and weren't acking multistream itself, whereas the other implementations were waiting for the ack before sending the actual protocol selection. So really just minor stuff.
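To make that mismatch concrete, here is a minimal sketch of multistream-select framing: each message is an unsigned-varint length prefix followed by the protocol string plus a trailing newline, and the listener acks by echoing the protocol back (or replying "na"). The two dialer behaviors below illustrate the incompatibility described above; the protocol names are just examples.

```python
# Minimal multistream-select framing: <uvarint length><protocol>\n

def encode_uvarint(n):
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_msg(protocol):
    payload = protocol.encode() + b"\n"
    return encode_uvarint(len(payload)) + payload

# A dialer that sends the preamble *and* the selection immediately,
# without waiting for the listener to ack multistream itself:
eager = encode_msg("/multistream/1.0.0") + encode_msg("/mplex/6.7.0")

# A dialer that sends only the preamble and waits for the ack before
# sending the actual protocol selection:
patient_first = encode_msg("/multistream/1.0.0")

print(eager.hex())
print(patient_first.hex())
```

Both behaviors produce valid frames; the disagreement was purely about when the selection frame is allowed to be sent.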
Quite frankly, even in Berlin we had some pretty amazing successes there. If you were in the all hands a few days ago, you probably heard about it: we've got Wireshark dissectors, and the contributor that built those continues contributing.
So he is continuing to evolve the Wireshark dissectors themselves. He created a fork of secio that has a key logging feature, so it dumps the keys, the symmetric keys, to a file, which the dissector then hooks up to, and that way it manages to dissect traffic.
We are now turning that into a layered pipeline, so that whatever the secio dissector extracts, it sends downstream to other dissectors, such that they can continue unpacking that payload iteratively.
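The key-logging handoff could look roughly like this: the secio fork appends one line per connection to a key log file, and the dissector loads that file to decrypt captured traffic. The file format here (a `<connection-id> <hex-key>` line) is purely an assumption for illustration, not the fork's actual format.

```python
# Hypothetical key log: writer side (run inside the secio fork) and
# reader side (run inside the dissector). Format assumed, not actual.
import io

def log_key(logfile, conn_id, key):
    # One line per connection: "<connection-id> <hex-encoded key>"
    logfile.write(f"{conn_id} {key.hex()}\n")

def load_keys(logfile):
    keys = {}
    for line in logfile:
        conn_id, hexkey = line.split()
        keys[conn_id] = bytes.fromhex(hexkey)
    return keys

buf = io.StringIO()
log_key(buf, "conn-1", b"\x00\x01\x02\x03")
buf.seek(0)
print(load_keys(buf))   # → {'conn-1': b'\x00\x01\x02\x03'}
```

This mirrors the well-known SSLKEYLOGFILE approach Wireshark already uses for TLS, which is presumably what inspired the fork.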
Then there is mplex, and there are going to be all kinds of application protocols as well. He's refactoring that into a multistream thing, such that the multistream Wireshark dissector would be an entity of itself, and other dissectors can then latch onto it and say: hey, I'm able to handle this particular protocol. So the heuristics of selecting dissectors will become a lot more efficient.
So hit me up if you're interested in any of that: gossipsub, the Noise handshake, gossip profiling and mDNS are among the things we got out of Berlin. I also worked with Mike on kicking off the first iteration's final draft, which also involved interviews, and this week I'm going to continue focusing heads-down on Testground with Mike. I'm probably going to start the PM refactor this week or the next, so I just want to give a heads up for next week.
I've got a bunch of requests. I would really love reviews on the peer-IDs-as-CIDs RFC. I don't think this is all that controversial, but basically, if you aren't aware, the idea is to encode peer IDs as CIDs in text. This makes it easy to tell if something is actually a peer ID and not just some other multihash. It also gives us multibase encoding for free, and it doesn't change anything about the wire formats; it's all just in the text form.
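As a sketch of what that text encoding looks like: a binary CIDv1 is the version varint, then the codec varint (peer IDs use the libp2p-key multicodec, 0x72), then the multihash, and the text form is multibase base32 (prefix "b", lowercase, unpadded). The toy identity-hash multihash below is just for illustration, not a real peer ID.

```python
# Encode a peer ID (a multihash) as a CIDv1 text string.
import base64

CIDV1 = b"\x01"        # CID version 1, as a single-byte varint
LIBP2P_KEY = b"\x72"   # libp2p-key multicodec, fits in one varint byte

def peer_id_to_cid_text(multihash: bytes) -> str:
    cid_bytes = CIDV1 + LIBP2P_KEY + multihash
    b32 = base64.b32encode(cid_bytes).decode("ascii").lower().rstrip("=")
    return "b" + b32   # "b" is the multibase prefix for base32

# Toy multihash: identity hash (code 0x00), length 4, digest b"demo".
toy_multihash = b"\x00\x04" + b"demo"
print(peer_id_to_cid_text(toy_multihash))  # → bafzaabdemvww6
```

Note that real peer-ID CIDs start with "bafz...", which falls out of the fixed 0x01 0x72 prefix bytes under base32.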
The second one is a negotiation timeout I'm seeing in go-libp2p. One of our users was seeing an issue where we establish a connection and then time out negotiating the multistream, negotiating the stream multiplexer. This is after doing the security handshake.
The thing is, the server isn't overloaded. If the server were overloaded, I'd say, well, maybe it's just taking a really, really long time to do the security handshake, and then it hangs up by the time it gets to the final handshake. But yeah.
Finally, there's a test failure on the stabilize branch in go-ipfs, so something is wrong with the stabilize branch. It doesn't appear... actually, no, I take that back, because it's also going wrong on the master branch as well, but this wasn't happening before I updated libp2p, so yeah, this may be related to the DHT changes. I'm not entirely sure; let's back-burner this one for now, because it could be a bit squishy and I have to do more testing there.
Sounds good, thank you very much. And I think the last thing, I don't know: Mike, I just left a spot on the agenda if you wanted to talk about the retrospective.
Yes, okay, sorry, I took a little time to find the window. Yeah, basically I put up a Q3 retro doc. I know Q3 is not over yet, it's still like a month away, but I thought there was a desire from Labs ops to basically do the retro before the OKRs, and the first draft of the OKRs is due on Friday or something. So I don't know, I think it just makes things richer.
Martin did a great job; that's where I saw how fragile the whole release flow is. The low barrier to actually making a release, just by creating a tag and pushing it, creates a huge surface for accidents, which might not be so important if you're just pushing a minor or hotfix release. But if, for whatever reason, you push a major release, then that is going to make all your downstreams alter their import paths, and that is super messy.
So I don't really know how to proceed with this, but frankly, we do have an immediate problem, and I came across it as soon as I upgraded my Go installation. I created a brand new project, imported go-libp2p, and it imported 6.0.23, because of what Go does, the proxy story, the centralized proxy story. So yeah, I think it's pretty bad.
To argue it from their side: we've had a lot of problems where people will change the tags on us, and we don't want that. The idea behind this is that if a user deletes a tag or changes something, the code should still be available. That's why they did this, and that makes sense, because, for example, go-ipfs 0.4.20 doesn't have this problem, right, but 0.4.21 doesn't build right now because of a change in badger; you just can't rebuild it.
This is the only downside, but again, I don't think there was a good alternative solution; I just don't think they had one. And all the metadata they get is "I am pulling these packages." They don't learn about your private packages; data about public packages, yes, you fetch those through their system, so they do get extra stats, but that's it by default.
We could create an entirely different namespace for us, where we could start afresh. However, if somebody can still push a bad tag, it would taint the namespace again. So unless we get people onto a custom Git host, or GitHub gets its act together and adds protected tags or something like that, we won't be able to reject those tags.
I have no faith in GitHub getting this together. It's slowly getting better, but yeah, this has been something that's been necessary for years and they haven't done it. One thing we can do: if we move to GitLab, which means, I think, we change URLs, we could probably just do that and just mirror to GitHub. Then we get protected tags.
So unfortunately I can't say "if this tag exists, it keeps existing", which is ideally what we'd have. But we can basically say no one can create version tags, and then we have a bot release the version: you create an issue for the version, decide on it, and the bot pushes the version tag.
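The bot-gated policy being described could be sketched as a simple check like the following. The approval store, the bot's username, and the tag pattern are all made-up assumptions; the point is only that version tags are rejected unless the bot pushes a version approved in a release issue.

```python
# Hypothetical tag-push policy: humans never push version tags; only the
# release bot may, and only for versions recorded in a release issue.
import re

SEMVER_TAG = re.compile(r"^v\d+\.\d+\.\d+$")

def may_push_tag(tag, pusher, approved_versions, bot_user="release-bot"):
    if not SEMVER_TAG.match(tag):
        return True                      # non-version tags stay unrestricted
    return pusher == bot_user and tag in approved_versions

approved = {"v6.0.24"}                   # filled in from the release issue
print(may_push_tag("v6.0.24", "release-bot", approved))          # True
print(may_push_tag("v7.0.0", "some-dev", approved))              # False
print(may_push_tag("testing-branch-tag", "some-dev", approved))  # True
```

In practice this check would sit server-side (a protected-tags rule or pre-receive hook), which is exactly the feature GitHub lacked at the time.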
But if we have a GitHub Action that reactively removes tags, even if it does that very fast, there's going to be a timing condition involved, of course, and there might be just the one bad case where, you know, the hook is slow or whatever, and somebody tries to fetch that module before the Action actually removes the tag, and that's going to break the namespace again. So it's definitely not a good solution, but I wanted to put it out there.
Another possibility here, and this is just for consideration while we talk about other possibilities, probably not the best one, but for completeness' sake: we could operate our own Git hosting and have GitHub be the mirror of it. This is what many organizations do. So, whatever.
That was never finished. For GitLab: if we used GitLab, we would get our own custom subdomain there, which we probably all see is pretty important, so it'd be something under gitlab.io. The problem is that's a lot of work to do that migration. I think we should do it eventually anyway, just because it extracts us from GitHub, which, as before, is not really delivering in terms of what users need.
I have a couple of other points. This one is a fun point: I will just say that we feel your pain, because in the npm world the problems are exactly the same. And this is karma, because we talked about versioning three years ago, so I know, it's ironic.
The other point is, I just wanted to catch up with everyone and, you know, broadcast what the heck I am going to be doing now. Essentially, and I don't think it's widely known in the organization, it's something I also discovered only in July, but there was an intent or desire back in June to create, at some point, a research group, right now a research lab, within the libp2p project.
There was a proposal within the research team to create something that can focus on networks research, so think about problems like DHT scalability, privacy-preserving networks, secure channels and so on. Just to give you a little bit of context from the last months: the key thing in June was this conversation about the research PM role and this networks lab.
So I spent a good amount of time just articulating, gathering from all the conversations back in July and all the notes written and all the discussions, what that networks lab is, what it is going to focus on, what it is not going to focus on, and how it is going to support the project teams, especially, of course, libp2p and IPFS. You might have felt the energy in the last few weeks; Yiannis has been doing amazing work.
Oh, and around all the topics, all the open problems and all the topics for protocol design, there is this intent to organize a research-intensive event. That's still in the works; I'll actually have to catch up on the details, but yeah. And so there is this networks research group that is forming, and the intent right now is to have a team that eventually, in the future, has the capability and the resources to pick up on any open research problem and go from finding collaborations to putting out research grants.
It's finding the problem, doing the first vertical, surveying the state of the art, testing what is out there and so on, and so kind of helping both you personally and libp2p, and taking some of that work off your plates. Of course, the team is pretty much just me and Yiannis right now doing that, so it's not like the problems get solved just by having a team; this will have to grow gradually, I think, but it is something that is kind of new.
I just want to give you positive feedback on what you're doing here. I think we definitely need to be a lot more theory-first with many of the changes that we decide to make, especially because we are many times playing about a year ahead, which is great, because, you know, there's a hype there.
We want to play with that and so on. But at the end of the day, it is true that, given the size of the network and the corresponding amount of projects that are depending on the work that IPFS is doing and libp2p is doing, especially now that libp2p has become a top-level project that is being adopted by, you know, renowned platforms and networks out there, we definitely need to get a lot more research-first with many of the systems and subsystems that we develop.
So this is a great initiative. In terms of testing, I'm going to send you the engineering document that summarizes the design; I don't know if you have it, but it basically defines what we're planning here. I think that would be useful. On the other hand, telemetry, I think, is going to be very important as well.
I was listening to a podcast a few weeks ago about how they make changes at Facebook, how they roll out changes and how they decide to proceed, because Facebook doesn't test their changes at scale; in general, they don't test the changes before they release them. They have a very controlled release process and telemetry, understanding how a change is actually rolling out, how systems are being affected by it, and how the change is being perceived by the users as well, to decide...
...whether to continue rolling it out, or to continue expanding the sample, or not. So I mean, that is potentially another avenue for us to explore for rolling out the changes that we're making: besides attempting to do huge-scale simulation of, you know, change sets, we can also do canary deployments in a very controlled fashion, or, you know, there's a lot of things that we can incorporate. All of that predicates on having a telemetry protocol such that we can actually sense what's happening.
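A toy sketch of that canary-gating idea: expand the change to a larger fraction of the network only while the telemetry it reports stays inside an error budget. The stages and thresholds here are made-up numbers, purely illustrative of the control loop being described.

```python
# Canary rollout as a tiny decision function driven by telemetry.
# Stages, budget, and the notion of "error rate" are all assumptions.

STAGES = [0.01, 0.05, 0.25, 1.0]   # fraction of the network on the change

def next_stage(current, error_rate, budget=0.02):
    """Advance to the next rollout stage, hold at full, or roll back."""
    if error_rate > budget:
        return 0.0                  # roll back: telemetry shows a regression
    i = STAGES.index(current)
    return STAGES[min(i + 1, len(STAGES) - 1)]

print(next_stage(0.01, error_rate=0.005))  # → 0.05 (healthy, expand)
print(next_stage(0.25, error_rate=0.08))   # → 0.0  (regression, roll back)
```

The whole loop only works if nodes actually report an error rate back, which is the point made above about needing a telemetry protocol first.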
Yeah, absolutely. So what you're saying is that there is all of this research, in some sense, around engineering best practices for rolling out these systems. And even picking the example of peer-to-peer networks, peer-to-peer systems from the past and so on: all of those projects had multiple contributors from multiple backgrounds.
They also learned a ton of things through that process. This is something that, for example, this networks lab could go and advance, kind of gathering all of the learnings, and also be a team that can share them, the way that, back in July, all of that experience and knowledge was basically brought into a workshop and delivered to the team for training and as a source of inspiration.
This makes for a little bit more of a sane approach between product-oriented teams, like deployment teams, versus teams that just need longer schedules to explore the research and establish all these collaborations. But then again, I expect also a lot of overlap, with some people being across both sides of the equation. It's more about process formalization and allocating proper resources across the whole spectrum.
Two quick points. Thanks for summarizing; yeah, we should progress from that. Just a quick thing: in Berlin we had a meeting between a few of us, Raúl was there, Mike was there, and we kind of tried to structure how to become a team in an efficient way. We were discussing with Raúl, for example, the fact that, depending on what the problem at hand is, it might take me a day to do something that someone else could do in two hours, and the other way around.
So for each one of those problems, depending on the experience, we should be flexible and assign the issue, at least as a first step, to someone who has prior experience and who brings background, and then pass the ball to others. This would go both ways, from the research team to the engineering team and the other way around, so that, you know, the issue gets further. So this is going to be a thing, yeah. It's also in the document where I listed all the open problems.
The thinking behind it was kind of like that, and it will have to be done together, yeah. And another side comment: it's not only Facebook; I've heard of many other companies that say that when they push something into production, in order to save cost and time, they just don't do any testing. The argument from an engineer friend of mine was: you know, we've got this user base of hundreds of thousands of people to test it for us, why should we do it?
We just get the measurements, we see what needs to be fixed, and we fix it a day later, instead of spending our own day or our own week to do tests and fixes beforehand. But for this you need to have the telemetry protocol there, and not only that...