From YouTube: Recovery from HTTP/DNS failures, webui, geoip, tests - IPFS GUI and Browsers Weekly, 2019-10-23
Description
About IPFS GUI and Browsers Weekly: https://github.com/ipfs/team-mgmt/issues/790/ipfs/bafybeieikh3nzcsujcf32q6nx3j7q6y4lqzfshtahtslcttson5ncyyh7i/
A: So in the past weeks we had a pretty nice contribution from an outside developer, who added a way to recover from failures. Basically, he created a framework for recovering from failed HTTP requests, and in the process of reviewing it I figured out that it's really a wider set of challenges, both technical and UX ones. So I created a project for tracking things related to this; I call it "resilience and offline". More or less, this is work related to recovering from either technical failures or things like DNS censorship blocking, as well as the overlap with offline use cases; often those things are interconnected. You can see some stuff already landed in master. I don't think we've released that to the stable channel yet, but it will eventually bubble up. For now we have this means of recovering from failed HTTP requests, so maybe I'll show it; it may be easier to see than to describe.
A: I have a browser with IPFS Companion installed, and right now "redirect to local gateway" is enabled. If I try to open a link to a dead gateway (this domain does not exist, so it will return a DNS failure), IPFS Companion automatically detects that it is an IPFS path, and when I open it, it redirects the request to the local gateway. That was already there. What we added just now is what happens when you disable that "redirect to local gateway" option and try to open the same dead link.
Previously it would just fail, and you would not be able to open the content. However, we have public gateways, and even if you don't run a local gateway, IPFS Companion should be able to help you get to the content you want. So what happens now is: if you open this dead link, you get recovered to a public gateway.
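The recovery just demonstrated can be sketched roughly as follows; this is a minimal, hypothetical version, not Companion's actual code, and the gateway URL and function name are illustrative:

```typescript
// Hypothetical sketch of the recovery flow described above: when a request
// to a gateway fails (DNS error, dead server), salvage the content-addressed
// path and retry it on a public gateway. Not Companion's real API.
const PUBLIC_GATEWAY = 'https://ipfs.io'

// Returns a recovery URL for a failed request, or null when the URL
// carries no /ipfs/ or /ipns/ path that can be reopened elsewhere.
function recoveryUrl (failedUrl: string): string | null {
  const { pathname } = new URL(failedUrl)
  const match = pathname.match(/^\/(ipfs|ipns)\/.+/)
  return match ? `${PUBLIC_GATEWAY}${match[0]}` : null
}
```

A dead link like `https://dead-gateway.example/ipfs/<cid>/cat.jpg` would be reopened as `https://ipfs.io/ipfs/<cid>/cat.jpg`, while a URL with no content-addressed path yields `null` and fails as before.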
A: So that's a pretty good way of healing link rot related to public gateways that go away. It could be that the contributor simply stopped paying for the domain name, or maybe the person just stopped... living. That's a bit morbid, but it happens. That way, links with content-addressed paths still survive. So that's for path gateways.
A: We also have a pull request, contributed by Colleen as well, to do the same for subdomain gateways, so we have basically feature parity for all types of gateways. So if there's a subdomain gateway provided by Cloudflare and everyone is using Cloudflare, but at some point Cloudflare decides "we don't want this specific type of content on our gateway" and they start returning HTTP error codes or simply dropping connections...
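The subdomain case can be sketched the same way: rewrite `https://<root>.<ipfs|ipns>.<gateway-host>/<path>` into the path-gateway form on another host. Again this is an assumption-laden sketch, with placeholder hostnames, not the actual pull request:

```typescript
// Hedged sketch of subdomain-gateway recovery: the content root is encoded
// in the hostname, so it can be moved into the path on a different gateway.
function recoverSubdomainUrl (failedUrl: string, publicGateway = 'https://ipfs.io'): string | null {
  const url = new URL(failedUrl)
  // Subdomain gateways look like <root>.<ipfs|ipns>.<gateway-host>
  const m = url.hostname.match(/^([^.]+)\.(ipfs|ipns)\./)
  if (m === null) return null
  const [, root, ns] = m
  return `${publicGateway}/${ns}/${root}${url.pathname}${url.search}`
}
```

So `https://bafyroot.ipfs.dead-gateway.example/wiki/` becomes `https://ipfs.io/ipfs/bafyroot/wiki/`, while an ordinary URL yields `null`.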
A: The way the recovery happens right now is that we just open the same resource on a public gateway in a separate tab. That may be confusing to the user, though; I plan to create an issue about this. Maybe the first time we do a recovery for a specific dead gateway we could display a landing page, similar to what uBlock does when you have a link with a lot of tracking embedded in the query parameters.
A: What uBlock does is give you context about what is happening, and an option: do you want to go to that URL permanently, or just this time? We could do the same thing and use it as a means of educating users. So tell them: this resource you're trying to open from a public gateway, your browser is not able to load it, because either the server is dead or the DNS query failed; and there should be an option like "yes, open it, but just this time" or "always fix dead links from this gateway". Something like that. I feel that would both reduce the surprise that the URL changed and give us a means of educating users about the value IPFS Companion, and IPFS generally, can bring in healing those broken links. So that's more or less the resilience and offline project.
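The proposed confirmation flow could look something like this; it is only an idea from the discussion, not shipped code, and the preference values and function names are made up for illustration:

```typescript
// Sketch of the uBlock-style flow: the first recovery for a given gateway
// shows an explanatory landing page; afterwards a stored per-gateway
// preference is honoured. Storage shape is hypothetical.
type Preference = 'ask' | 'always' | 'never'
type Action = 'show-landing-page' | 'recover' | 'fail'

function nextAction (prefs: Map<string, Preference>, gatewayHost: string): Action {
  const pref = prefs.get(gatewayHost) ?? 'ask'
  if (pref === 'always') return 'recover' // user opted into silent recovery
  if (pref === 'never') return 'fail'     // user declined recovery for this host
  return 'show-landing-page'              // first time: explain what happened
}
```

The landing page itself would then offer "just this time" (leave the preference at `ask`) or "always fix dead links from this gateway" (store `always`).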
A: I believe, for the current iteration of web UI, the city is the granularity level we want to focus on, and if the old library does not support that, we could... So that's the thing: we don't want to duplicate datasets, but if we are able to optimize both the size and the lookup times for this specific use case of web UI, we maybe should.
A: Another thing is that what we have now only has nine fields: country code, country name, region code and so on. But the new dataset has a lot more than what we have now. It has time zones, whether a country is in the European Union, the name of the continent, the code of the continent, the country subdivisions... oh man.
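One way to keep the dataset small, as discussed, is to trim each rich record down to the fields web UI actually renders. This is illustrative only; the field names below are assumptions in the style of GeoLite2 CSV columns, not the real ipfs-geoip schema:

```typescript
// Illustrative: reduce a rich geoip record to the handful of fields the
// web UI displays, discarding time zone, EU flag, continent, etc.
interface SlimRecord {
  countryCode: string
  countryName: string
  regionCode?: string
  city?: string
}

function slim (record: Record<string, unknown>): SlimRecord {
  return {
    countryCode: String(record.country_iso_code ?? ''),
    countryName: String(record.country_name ?? ''),
    regionCode: record.subdivision_1_iso_code as string | undefined,
    city: record.city_name as string | undefined
    // time zone, EU membership, continent name/code are dropped on purpose
  }
}
```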
A: Yeah, I see this is a much wider discussion. I think let's just create an issue in the ipfs-geoip repo about how to tackle these new formats, if you can specify what was in the old one and what's in the new one. I was under the impression they basically just shuffled some stuff around on their website but the data was usually kept the same; it seems like they revamped the entire thing, though. Yeah.
D: I'm wondering if the Status page is consuming too much CPU. I also talked to Holly, and they told me the bundle has a way to refetch stuff after a number of seconds, and that I should probably take a look at it to see what it is actually doing.
A: We are making significant performance improvements for the Peers page, which gets pretty hairy if you have thousands of peers. I specifically forced my local node to have around 4k peers just to see how it impacts performance. I believe the CPU consumption of web UI that was reported in the original issue was mostly for the ipfs-desktop use case, when people keep the Status page running in the background or something like that, so I think that one is tackled.
A: In the long run, the problem with the Status page is that we are constantly polling (I believe the bandwidth stats API), and we poll it even when we are not on the Status page. That's because the graph of bandwidth over time actually has historical data even if you switch to a different tab, which is sort of suboptimal.

We probably would need a more advanced stats API which has knowledge of historical data, but that requires changes to go-ipfs and js-ipfs first. So that's probably something that may happen when we talk about an API v1 or something like that, and start gathering those needs: APIs that are missing, or limitations of existing APIs. Given the APIs we have right now, I don't see how we can improve much request-wise. We could remove the overhead of redrawing the canvas.
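Until a stats API with history exists, the page has to keep its own history client-side while it polls. A minimal sketch of such a bounded buffer, with an arbitrary example capacity, assuming nothing about web UI's actual internals:

```typescript
// Minimal sketch of a client-side history window: the Status page keeps a
// bounded buffer of bandwidth samples while it polls, since the API itself
// has no notion of historical data.
class SampleBuffer<T> {
  private samples: T[] = []
  constructor (private readonly capacity: number) {}

  push (sample: T): void {
    this.samples.push(sample)
    // Drop the oldest sample once the window is full
    if (this.samples.length > this.capacity) this.samples.shift()
  }

  values (): readonly T[] {
    return this.samples
  }
}
```

For example, `new SampleBuffer(360)` holds roughly an hour of history at one sample every ten seconds; pausing the poll loop when the tab is hidden would address the "polling while not on the Status page" part.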
A: Yes. For anyone watching this in the background: the geoip database we have in web UI is a bit out of date, and some IPs naturally get sold, purchased, moved and reassigned. I had nodes from the United States with a super fast connection, faster than light, because the IP got reassigned to Germany but web UI was still showing the US. I had something like 20 milliseconds from Poland to the US, which is just not possible.
A: So that's probably something we need to figure out first: do we want to create this test infrastructure? Oh, and another thing is how those tests run from js-ipfs. If you want to run the web UI test suite from js-ipfs, aegir takes care of that: there's an aegir test-external command, and that command basically checks out the repository, installs dependencies, and then runs, I believe, either the build or just npm run test.
A: So the thing is, we need to figure out a way to both ensure our test command, the one that would be run by aegir, does what we want (which is basically running all the tests), and also make sure there's a way to run those tests against different runtimes. Right now, I think, aegir test-external simply runs those tests and that's it.
A: What's already there and what's missing? Because, correct me if I got this right, in web UI we simply decided when we started that aegir at the time was mostly used for libraries, and that's why we did not pick it: there was not much value added by aegir at that time for an end-user application. So we are simply running plain tests, and I think we have a simple Puppeteer end-to-end test, which is sort of hard-coded to Chrome.
A: Yeah, that's the point: aegir solved a lot of this stuff already. So the first thing I want to see is whether we'll be able to simply either migrate, or set up aegir-based tests next to the existing ones, mostly because aegir takes care of all this orchestration for running stuff against different runtimes like Chromium and Firefox. I believe they are using either Firefox headless or something like that internally; Hugo probably knows the details.
A: Yeah, in js-ipfs you are able to run a specific test in a specific runtime. So you can grep just one test and say "I want to run it in a web browser, and then I want to check if it also works in Electron", and that's just one command in aegir. That's why...
A: The first thing I would like to do is to see if we are able to reuse that, because if we start writing custom orchestration for Puppeteer, separately for Firefox and separately for Chromium, then we run into the issue that Hacdias described: ipfs-desktop is also not using aegir, so we would perhaps have to set up something similar there at some point. So maybe look at whether we are able to switch to aegir, maybe not for builds, but for tests, or as a separate way of running tests.
B: Will that get us to a place where other repos can run their tests against things like web UI? Ultimately that's all we want, right? We want people to have a configurable ipfs, whether it's js-ipfs or go-ipfs, against a configurable web UI version, and I guess a configurable Companion or Desktop, right?
A: Yeah, I believe we probably want to replace the default test command to run everything, and then provide docs for people who want to just run tests in watch mode (where the tests reload) or run tests against specific browsers. But by default we probably want this one command that runs everything sequentially. So I'll probably comment on the existing issue where I posted the matrix; I was in the middle of looking at this right before this call.
A: I think when we meet next time we'll probably have a better understanding of whether we are going with aegir and, if so, how much work needs to happen. I'll probably ask Hacdias what's actually happening right now in the end-to-end tests, because I only briefly looked at them; I had not looked at them before, so it's new for me. But yeah, I think we can do this, and I will spend some time on trying to reuse aegir, because it's probably the cheapest and the best option right now.
A: However, before we go there, we need to first of all streamline and polish the process of generating snapshots, and the test case I suggest starting with is a smaller Wikipedia. We already have a mirror of the Turkish Wikipedia; that's why I created the issue for it. It's smaller than the English one: the English one is, I believe, six hundred fifty gigabytes, and this one is around forty, maybe. So it's much, much smaller, and you can effectively build it within one day on a regular laptop.
A: So the task list is: create a new snapshot using the instructions from the readme, see what goes wrong, and basically write an experience report (I had some issues with the extraction script we had, among other things). Then look at the generated snapshot and check that it has the canonical link, which was an issue with search engines before we fixed it. Then add a footer at the bottom of every page describing what the source was and when it was generated, and make sure the whole thing works. And if those things work, we would...
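The footer step from that task list could be sketched as a small post-processing pass over each page. The wording of the footer and the helper name are invented for illustration; this is not the actual mirror tooling:

```typescript
// Rough sketch: inject a provenance footer into each snapshot page,
// recording the source wiki and the generation date.
function addFooter (html: string, source: string, generatedOn: string): string {
  const footer = `<footer>Snapshot of ${source}, generated on ${generatedOn}</footer>`
  // Place the footer just before </body> when present, otherwise append it
  return html.includes('</body>')
    ? html.replace('</body>', `${footer}</body>`)
    : html + footer
}
```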
B: So it seems like there's two discrete parts there. The first is the snapshot generation, and that is not IPFS-specific at all. We need somebody to do some testing on that generation process, to verify that the steps are correct and that the generated snapshot is correct.
A: The problem is, we are not using the official ZIM extraction tool. We created a custom one which enables us to unpack the archive and add it to IPFS, and then we do some additional processing. So it's a challenge to streamline the process, because right now, if you go to the readme, there's a lot of manual steps that need to happen, and before we invite third parties we should figure out those scripts, because they are too complex and a lot of them simply depend on other stuff.
A: Yep, under the snapshots repo, I believe, there's a wrapper script for building snapshots and a meta issue about making it more usable. Those are pretty old, but the details are there, along with a rundown of some challenges. I could probably go over those issues, close them, and create a single one, as right now all those problems are scattered across multiple issues. Okay.
B: I'm taking notes in the meeting notes as you're speaking, noting which issues. But yeah, even just having a clear plan, an idea of what needs to be fixed, will help other people. I was thinking of trying to organize a Saturday hackathon or something like that at some point, and get some people together to hack on some of this and get us prepped, so hopefully we can get stuff done in January.
B: I'll add, as a next issue: there is a browser vendor inquiring about what those patches look like that Brave used to whitelist the chrome.sockets APIs in Brave. Do you have a link to the Chrome or Brave changes they made to be able to whitelist those for us?
B: And this is kind of it: we talked a little bit about doing a browser update post, specifically about the technical details of how the Brave changes were implemented, both the hijinks you had to do inside Companion for things like the local gateway, explaining to people the lovely scenario of running js-ipfs inside a content page inside a browser, and also some of these unique bits as well, so that we can get the whole picture.
A: Yeah, actually, it would make a pretty good blog post, because it's more visual, I believe. I can write a wall of text, but I can also sketch something; I can make a pretty good visualization of all those blocks and what's inside what, and I feel it will look pretty cool visually.
A: Yeah, and just to close on what Brave did to whitelist access to those APIs: I'll find the PR, the code is public. Long story short, they have an internal list of blessed extension IDs, and because each extension is cryptographically signed, if your extension ID matches, you get access to specific APIs. For Companion that means chrome.sockets.
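A simplified model of that mechanism, with a placeholder extension ID rather than Brave's actual list:

```typescript
// Simplified model of the allowlist just described: the browser ships an
// internal list of blessed extension IDs and grants extra APIs when the
// (signature-verified) ID matches. IDs and API names are placeholders.
const BLESSED_EXTENSIONS: Record<string, string[]> = {
  'companion-extension-id-placeholder': ['chrome.sockets']
}

function grantedApis (extensionId: string): string[] {
  // Unknown extensions get no extra APIs
  return BLESSED_EXTENSIONS[extensionId] ?? []
}
```

Because the ID is derived from the extension's signing key, a third party cannot impersonate a blessed extension to obtain the extra APIs.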
B: Cool, I'll trace that issue. If you've dropped the initial issue, we can spelunk a little bit and find whether or not they had to do anything in the guts of Chrome to be able to expose those specific APIs to an extension context. Something like that.
D: Can I just... yeah, like, ipfs-cohost has ipfs as a dependency, and I wonder if we should remove that; basically the experience is diminished when there's no online daemon right now, and it's an interesting question... with npx, install and run, the executable is downloading it. Yes, yes, yeah.
A: So that's interesting; that's like a separate topic. Do you know if we have an issue to discuss that? Because the discussion here is: if we implement the co-hosting spec in ipfs-cohost as basically a JS library, a js-ipfs-based library, then we could reuse this co-hosting library in both Companion and Desktop.
A: We probably could leverage community help with this as well. I'm not sure... Jan is responsible for writing the IPFS newsletter, and I believe she may be interested in following those apps for the newsletter, but maybe she could also help: if she's reading all those incoming new apps anyway, she could act as some initial filter. Yeah.
B: I know there's an open issue for that in that repo and no clear answer, so I think maybe next week at this meeting we dedicate maybe 15-20 minutes to just saying: okay, what would those criteria be? How could we write them up in a way that's easy for anyone who walks up to the repo to implement fairly, so that anybody else can walk up and do it? That would be... oh yeah.