From YouTube: ws-stardust for js, metric sources, subdomain gw. config - IPFS GUI and Browsers Weekly 2020-03-11
Description
Meeting Notes: https://github.com/ipfs/team-mgmt/pull/1124
About IPFS GUI and Browsers Weekly: https://github.com/ipfs/team-mgmt/issues/790
IPFS Mirror: https://ipfs.io/ipfs/bafybeiaferurwsnid55nslvyrjrnx5okzv7333hhe6wfyebfa7gdjxd3v4
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
A
Super cool. I talked to Jacob before about the WebRTC one and the docker image. The plan is for those docker images to be useful not only for our infra, but for anyone who wants to run their own node and doesn't want to care about all the dependencies; they can just run a docker image in their own right. Yeah.
C
I mean, basically both are just within js-ipfs. We have browser examples which, before the refactor, were using webrtc-star and websocket-star for transport and discovery of peers, and after the refactor they were only working with webrtc-star. In the examples, what I did first was, in two initial files, just remove webrtc-star, rely only on Stardust, and check if everything was working as before, and it is. Then I basically tried both together, webrtc-star and Stardust, and that is now the current state of the PR.
C
The goal is basically to help people integrate Stardust. The code has comments, and in the example people are guided on how to configure it. But Jake and I were discussing that once we have these deployed, and we don't need to go through peer IDs and stuff like that, we could eventually just make it work out of the box, so people can simply run the example instead of needing to do this initial configuration part.
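As a rough illustration of the kind of configuration the example comments guide people through, a browser node's swarm addresses might look like this. This is a sketch only: the hostnames are placeholders, and the exact multiaddr format for Stardust should be checked against the PR.

```javascript
// Hypothetical swarm configuration for a js-ipfs browser example.
// Both entries point at rendezvous/signalling servers, since browser
// nodes cannot accept inbound TCP connections. Hostnames are placeholders.
const config = {
  Addresses: {
    Swarm: [
      // Stardust rendezvous server (the transport the PR switches to)
      '/dns4/stardust.example.com/tcp/443/wss/p2p-stardust',
      // webrtc-star signalling server, kept alongside for comparison
      '/dns4/star.example.com/tcp/443/wss/p2p-webrtc-star'
    ]
  }
};
```

Once the team's servers are deployed at known addresses, lines like these could ship pre-filled in the example, which is what would let people run it without the initial configuration step.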
A
Historically, we had situations where changes were made to js-ipfs or js-ipfs-http-client but we forgot to update the examples, and people look at the examples, so that was a pretty bad experience. So now that's a part of our CI setup.
E
CI will fail if a change you made broke the examples, so you need to update the examples; you can't just make breaking changes. So that's why it's important. Yeah, awesome, thank you so much. For some reason I always struggle to spell it, and it's such a short word.
D
He thought we would be interested in it at PL, so I'll link to that document here in the notes as well. It would be a narrower field of view around his experience and what he's recommending. Quick stock check: cancelled, I hope? We were just about to ship it. Yeah, but I...
D
More than anything, I think there's probably going to be some public communication: we'll announce this, PL, who's going to be there and what we're doing. We're also thinking about running an event, you know, maybe a happy hour or meet-and-greet, that type of thing; nothing crazy. For this group, number one is to get familiar with the set of content. Look at the IETF YouTube channel: every session is recorded, so I think...
D
What would be cool is to take this list here of the schedule and figure out which of these we all maybe want to watch later and then talk about. We can pick some of the sessions that we think would be interesting to learn about and then, in a later browsers and connectivity meeting, set aside 15 minutes or something like that and kind of review what we learned from watching the different videos, or specific videos. I liked when the web packaging discussion, the defense of web packaging, was happening.
D
You had people from Cloudflare and Mozilla and Google battling it out in an open discussion, which was really fascinating to see: that back and forth, and the nature of what the critical discussion actually looks like in some of these sessions. Then there are also separate birds-of-a-feather sessions, so we could do something like running a BoF with others, around people who are interested in p2p network protocols and things like that.
D
It would also be great, maybe, to have Rihanna come and talk a little bit about it as a guest speaker here at this meeting, to tell folks about his experiences so far. Because there are basically two parts: there's the IETF and there's also the IRTF, and one is more academic, while the other is more about protocols and standards that are in use in production today. So even within the overall organization there...
D
There are different mega subgroups there, and it's also very contentious. When Jessica introduced us to kind of one of the fathers of the organization, he had some interesting feedback on just how useful it is for different things, and on where things can get stuck in the IETF.
D
Where the current standards effort can really be, you know, the opposite of momentum: getting a bunch of different people to try and agree on something can sometimes not be the most effective way of driving it forward. So it's good to understand the relative pros and cons.
D
But just being in this conversation, for all of us, joining, being present, watching how it works, learning what we can, is going to be the priority, as opposed to, say, any specific individual spike or goal there. I think one of us hanging out at these meetings, just learning and soaking in as much as possible, meeting a bunch of people, making connections, and understanding where we are. You know, we have kind of three ways that we do things, right?
D
One is integrating with existing software. Two is ignoring all existing software, running your own stuff and writing around it. And then third is this kind of participation: these long-running, broader discussions around the art of how software changes over time and how we think about it as a broader group of practitioners. So understanding the place that bodies like the IETF have, and which groups or people there can help us, supports making those decisions.
A
For sure, I think it's very useful to watch. Definitely pick, as homework, some sessions that interest you and see how they work, what the dynamic is, especially the one Dietrich mentions, the web packaging one from Google. It's sort of on the extreme end, but it gives you a good feeling of how open that discussion is. And when you bring a proposal, you need to be prepared, prepared for defense or for something like what happened there. Super useful just to prepare yourself.
D
Alright, I just wanted to do a quick review with this group to see whether or not these goals still make sense. I know we've been tracking this, and Lidel has been pushing hard on it as a high priority; it really affects our whole ecosystem, kind of how we ship IPFS. On the connectivity meeting: we just talked about Stardust, and it's fantastic that it looks like that is on track and landing soon.
F
This kind of goes in line with the WebRTC distributed signaling. There are two tracks here, and we'll probably focus on the first track, which is making WebRTC signaling distributed, so that we can do it over an intermediary node without relying on signaling servers. Then there's a more generic version of it, which is signaling for hole punching in general, because we might also want to use generic signaling over a third-party relay to do, for example, TCP hole punching.
F
So there's the option to potentially do that as well, but we'll probably punt on it so that we can get the WebRTC thing sooner and then abstract it out into a more generic p2p signaling thing. That is still targeting Q2. Along the same lines, what's needed for distributed WebRTC signaling will be the direct connection upgrade, so that we can use a relay as our signaler, then dial directly to each other over that, and then kill...
F
...the relay connections, assuming we don't need them anymore, but we'll probably keep them for listening. Then, on a completely different track from signaling, is the peer exchange protocol, which we should still do in Q2, because it has a lot of value in general: it gives us another method of discovering peers when we have limited connectivity in the browser.
F
The spec will be in a draft state, and then we'll get a proof of concept so that we have an implementation of it, and then flesh that out so that we can move the spec, potentially in Q3, to recommended, because by then, in Q3, we'd have a full-blown working implementation. Okay, great.
D
So I think the key here, the next piece that I need, now that you all have reviewed these and thought they made sense, is actually to identify the data sources for each one of these. Down here, rows 21 through 30 are a set of metrics that we've identified, but in order to be able to track those, we need to actually wire up that source of data into Grafana, and they've tried to make that as easy as possible.
D
We'll do that, but the first step is to actually assign these to the people in this group who would know where this data is, and then actually spec out what the API calls we need would be. What are the endpoints that we need to hit, where is that data collectable, and where can we write it: the actual nuts-and-bolts metric collection of this data. So let's spend a couple of minutes here. I know Lidel...
F
On this one, we kind of commented that we don't have any public relays, at least none that js-ipfs uses at all. So right now this is zero, and it's going to continue to be zero for a while, likely until we do distributed signaling, because we'll need to have limited relays in place before we measure that. Okay.
F
I think a better thing to track here would be Stardust users and webrtc-star users. One of the things that we will be tracking with webrtc-star is that we'll have metrics being dumped to Prometheus, not only for the load on the server, but also for things like: how many joins are we getting, how many join errors are we getting? We'll be able to see some of those actual numbers, so we'll be able to pull those out and dump them into the dashboard.
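The join and join-error counters described here could be exposed by the signalling server roughly like this. It's a minimal, dependency-free sketch: a real deployment would use a metrics client library such as prom-client, and the metric names and the `onJoin` hook are hypothetical.

```javascript
// Minimal sketch of Prometheus-style counters for a signalling server.
class Counter {
  constructor(name, help) {
    this.name = name;
    this.help = help;
    this.value = 0;
  }
  inc(n = 1) { this.value += n; }
  // Render the counter in the Prometheus text exposition format,
  // which a Prometheus server scrapes over HTTP.
  expose() {
    return `# HELP ${this.name} ${this.help}\n` +
           `# TYPE ${this.name} counter\n` +
           `${this.name} ${this.value}\n`;
  }
}

const joins = new Counter('star_joins_total', 'Peers that joined the rendezvous');
const joinErrors = new Counter('star_join_errors_total', 'Failed join attempts');

// Called from the server's join handler (hypothetical hook).
function onJoin(ok) {
  if (ok) joins.inc();
  else joinErrors.inc();
}
```

Grafana would then graph these series straight out of Prometheus, which is the "pull those out and dump them into the dashboard" step.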
D
So yeah, take the description, and then can you write into the data source column a description of what data is going to be in Prometheus? And it sounds like we're too early. Are we too early for that, or when will that go live, when will that data actually be available? Yeah.
F
I got blocked on some permissions, but I'm finishing up the Docker Hub work now. I'll finish up the Docker Hub deploy of the container, so we'll have that up on Docker Hub, and then I will sync up with infra. They'll probably need a week or two to handle provisioning, and then they'll just pull down the Docker Hub image and build the deploys from there. Once we have that, we'll be good to start pulling that data. So, hopefully, all in all, I would say three to four weeks on the high end. Okay.
C
That would be when we have a persistent peer store, so that we would not really need to rely that much on bootstrap nodes. So the metric could eventually be the number of connections that the bootstrap node itself needs to have open. But I'm not sure if that's achievable, because if we have more connections, that's also kind of a sign that we are having more users, so I'm not sure which would be the good metric here.
F
This one is tricky. I mean, you could look at unique peer IDs, but I don't know what kind of data we can get in terms of how often the same users are coming in and talking to us, like looking at peer IDs over time, because ideally bootstrap nodes should be seeing the same peers...
F
...less, right. Repeat visits from peers should be really low: I should join, connect to the network, get peers, and then in reality I shouldn't need to talk to the bootstrap nodes again, unless I run into a situation where all of the peers that I knew are unavailable now, and so now I need to go back to the peers that I know are good, which are the bootstrap peers.
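One way to frame the "returning vs. first-time peers" idea being discussed: keep the set of peer IDs seen so far and count how many connections come from already-known peers. A minimal sketch; the input (a chronological log of peer IDs that contacted the bootstrap node) is an assumed record shape, not an existing API.

```javascript
// Count how many bootstrap contacts come from peers we have seen before.
// With a persistent peer store on the client side, peers shouldn't need
// the bootstrap nodes again after their first join, so the `returning`
// count should stay low relative to traffic.
function connectionStats(peerIdLog) {
  const seen = new Set();
  let firstTime = 0;
  let returning = 0;
  for (const peerId of peerIdLog) {
    if (seen.has(peerId)) {
      returning += 1;
    } else {
      seen.add(peerId);
      firstTime += 1;
    }
  }
  return { unique: seen.size, firstTime, returning };
}
```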
F
There could also be an interesting correlation here, in terms of looking at the webrtc-star and Stardust statistics in conjunction with the load on the bootstrap servers. If those are high, and our peer store persistence is working well, then in theory that should correlate to a lower hit rate, to some degree. But then, yeah, growing popularity will eventually cause problems with that number. So I think in the interim it could be a very interesting number, but long term...
A
Makes sense. It sounds like either way we'd track it by storing peer IDs over time and coming up with a number based on that history, like historically seen peer IDs over time, and that set needs to be refreshed anyway. But the question is: if we do that, would it even be a meaningful metric?
A
I feel like the bootstrap node load may not belong here at all; I'm not sure. It sounds more like something the content routing group may be interested in, because that's not specific to browser transports; it's about the general health of the network. Although, yeah, basically tracking unique peer IDs over time feels like a high-level health analysis of the entire DHT, or of the whole network. Yeah.
D
Okay, can one of you write a summary in the data source column of why this would be meaningful and why it's hard? You know, that way we at least have these notes captured around why we're not implementing it yet. Spec blockers: this is an interesting one. How would we measure that, and why is it meaningful?
D
Starting again: I'm not sure that this is a metric; we wouldn't want to chase numbers with something like this. But I like it as a reflection of how well we're able to scale our ideas out to the community. I think that's something good.
F
Yeah, I think the interesting metric here is something like: how many dev grants are we creating? Not that high or low is good by itself, but: how many are we creating, how many of those have gotten picked up, how quickly have they gotten picked up, and then how many are getting completed. Because just...
D
So I think everything you listed is definitely a set of things we want to track, but probably at the whole grant program level, not at the browsers level. I'd rather you all focus on the browser-specific bits. I think I'm going to delete this one as well for now, and just have it be, ideally, something that we're practicing as a team as a way to scale, which we already are; Lidel is spending a bunch of time on dev grant reviews and stuff already.
A
Yeah, it's a vanity one, unless we frame it as: assuming the native protocol handler from the browser extension grant lands, that metric would reflect how many browser vendors adopted the patches that were deliverables from that grant. But it's still a vanity metric that requires additional description to make it more meaningful, which I don't really feel good about. Yeah.
D
But we have to have something to keep the fire going. So it sounds like these are it: webrtc-star will be a metric for Q2 for sure, which is great, and companion installs are definitely something we want to track. I feel like we can do this couple of metrics first, but I would like, over time, to get a more sophisticated understanding of what progress actually means. Maybe this means digging into connectivity types; maybe this means adding more telemetry in different places.
A
Apart from that, there is a new configuration object called PublicGateways. When this PR is merged, it will enable people to define custom gateway behavior for a specific hostname. So, for example, it's possible to expose only specific paths on a specific hostname: if you only want /ipfs but not /ipns, or vice versa, you are able to do that with this notation. You are now able to enable a subdomain gateway on any hostname; here we use our own subdomain gateway as an example, since dweb.link is a subdomain gateway.
A
We have dweb.link serving subdomains for /ipfs and /ipns; this is how you would create your own. And if you still want to run the old-school path gateway, it's just a matter of setting UseSubdomains to false. You may not want to expose the API on that; if you do expose the API on a subdomain gateway, it will get a URL on an api subdomain, but the API path will stay the same. And you can control whether DNSLink will be resolved for the hostname, so you may use DNSLink as a landing page for your subdomain gateway.
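A sketch of what such per-hostname entries could look like under Gateway.PublicGateways in the node's JSON config, shown here as a JS object. The field names match the discussion above (UseSubdomains, Paths, NoDNSLink), but the exact schema should be checked against the PR; the second hostname is a placeholder.

```javascript
// Hypothetical Gateway.PublicGateways section of the go-ipfs config.
const publicGateways = {
  // Subdomain gateway: each CID gets its own origin,
  // e.g. https://<cid>.ipfs.dweb.link
  'dweb.link': {
    UseSubdomains: true,
    Paths: ['/ipfs', '/ipns']
  },
  // Old-school path gateway: /ipfs only, DNSLink lookup disabled.
  'gateway.example.com': {
    UseSubdomains: false,
    NoDNSLink: true,
    Paths: ['/ipfs']
  }
};
```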
A
That is, when someone opens the domain without any CID in the subdomain. Or you may decide that you don't want to run any public gateway at all: you just want your own website, and you want to use IPFS just for publishing. So I've added some gateway recipes at the end for those most common cases: a subdomain gateway, one line to set it up; a path gateway...
A
...ensuring no one can force your node to fetch someone else's content, so there's a NoFetch option here. We disable the DNSLink resolving, so no one can point their hostname at your IP, at your server, to abuse your bandwidth. But say you want to host a specific website: let's say you have bandwidth, but you want to contribute that bandwidth only to the Wikipedia mirror. What you would do is set up your DNSLink pointing at your server.
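The Wikipedia-mirror recipe being described could look roughly like this, again shown as a JS object for the JSON Gateway config. It's a sketch under the assumptions above: field names and the per-hostname override behavior should be verified against the PR's recipes.

```javascript
// Hypothetical gateway config for a node that serves only one DNSLink site.
const gatewayConfig = {
  NoFetch: true,    // never fetch content the node doesn't already have
  NoDNSLink: true,  // refuse DNSLink resolution for arbitrary hostnames...
  PublicGateways: {
    'en.wikipedia-on-ipfs.org': {
      NoDNSLink: false, // ...but re-enable it for the mirror's hostname
      Paths: []         // and expose no generic /ipfs or /ipns paths
    }
  }
};
```

With this shape, pointing a random hostname at the server yields nothing, so the node's bandwidth goes only to the mirror it chose to host.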
A
Long story short, we want to make this transition easier; we want to make it easier for people to run a subdomain gateway. Just as a quick reminder, we have this public gateway checker, where you can see community-contributed gateways, and there is an origin test there. Right now we have dweb.link, Cloudflare, and some third-party one providing the subdomain gateway feature, which gives origin isolation.
A
That's more of an open question. We discussed it, and the decision at the time was to not change the behavior on ipfs.io, because a lot of people use ipfs.io for scripting with curl and wget, and a lot of tools do not expect a redirect. If we return a redirect, a web browser will just follow it and return the final payload; however, if you run curl without the -L parameter, it will just return the redirect without following it, and that would break a lot of workflows.
E
Hello. I already talked a little bit about this. I want to get some ideas from you all, to start designing test plans: the type of testing we'd run, what the setup would look like, and things like that. I'm starting to have conversations with Raúl and the Testground folks, and probably around Friday or Monday I'm going to get at this designing.
E
As for what the test plan would look like: I wanted to get a basic test plan first, then some ideas for further steps. So if you want to give me some ideas, I'm open to it; I would love to hear from you to better design those tests. So yeah, any ideas, please send them to me, or if you want to have a call and discuss it, that's fine too.
A
We are short on time, so I say we send those testing scenarios to you. For sure we want to be a part of that Testground endeavor, and for the browser stuff going forward, when we get to things like reliability, or even upgrading to WebRTC from other, more centralized things, it would be super useful to get insight in Testground into what's happening.