From YouTube: IPFS Docs & Developer UX Weekly Sync 2019-07-08
Description
Weekly Meeting Notes: https://docs.google.com/document/d/1EOD-pJi4GvRmGi9HHocgVV8uVHMFIZlyVgJDkvC3DQ4/edit
A
One thing I know my plan is, and I wanted to double-check this with everybody, is to repurpose the README of the main docs repo, since that's an existing place.
B
The core reason that this is so important to us and to the project this quarter, and why we think it deserves to have a task force around it, is really the understandability: the ability for people to come to the IPFS project, or be effectively using the IPFS project, and have a good experience doing so. That really starts from our reliability and our trust in setting good expectations for what they can and can't do with the protocol in its current state. What works? What's a work in progress?
What's still a dream of the future? And that we're aligning those expectations and holding up our side of being a good partner in helping them achieve their goals with the protocol. So to me, docs are a really core part of that. If we are doing a bad job communicating that, then we're really failing our users. But the goal is not the documentation itself. The goal is not "hey, our docs should be X."
B
It's
people
have
a
great
experience
who
are
trying
to
use
ipfs
today
getting
the
information
they
need
to
do
so
effectively,
and
so
it
can
I.
Think
prioritization
is
great,
saying:
hey
we're
gonna
focus
initially
on
this
chunk,
and
even
just
like
this
subset
of
documentation,
because,
based
on
our
information,
we
have
based
on
all
of
the
feedback.
People
gave
us
for
my
PMS
camp.
B
I,
guess
maybe,
which
is
you
know,
CVT
whether
you
consider
that
documentation
or
not,
but
like
it's
a
it's
a
place
where
we
tell
people
how
they
can
make
use
of
ipfs
and
if
it's
not
accurate,
then
we're
doing
them
a
disservice,
and
so
I
just
want
to
make
sure
that,
like
we're
role
lined
in
like
that's
the
goal,
the
goal
is
to
help
people
use
ipfs
really
effectively
and
that
we
need
to
kind
of
measure
ourselves
on
on
that
capability.
C
I personally do see the website as part of that effort, and you'll see at least one of our items on the OKRs is to bring use cases into the spotlight on the website and use them as a metric collection tool. And one of the things that I'd really like to see, I know I mentioned it the second week, is that we're not just addressing how; we're also addressing why. Obviously that's more narrative, it's more stories, and I know it doesn't fit as directly for some people, and maybe it's more useful for beginners than for people who feel like they already know why they're here. But I think the d-web in general is not well understood, and the more we can do to get people from the normal web, to the d-web, to why IPFS, the more helpful it is. But obviously that's not the immediate first thing that you'll see.
D
Would you like me to... yeah, let's say I'm going to grab these two. So here are the two.
A
So basically, what we did, just as a recap: this is the starting document, the IPFS Q3 team organization doc, which was the ending document that we worked with in Barcelona. Earlier today we iterated on that a little bit, and you can see the second link that I just sent over is the OKR spreadsheet. As I understand it, the aim of this meeting is to solidify these OKRs, so let's go through them point by point.
A
All right, so moving on from these onto the first item: these consolidate into three categories. The first is really an evaluation of both the existing content materials and the presentation structure that we're using to present them right now, as well as making decisions for how we need to change and improve. Obviously there's a tactical component, but bear in mind that this is really a strategic exercise that we want to last for many years.

We chose to focus on making good architectural decisions based on what we've done so far, what we've got, and the best steps for moving forward, and to make sure that we do those with a great degree of diligence before we just start making stuff that may need to be ripped out or redone in the future. With that in mind, there are three things in this first goal. The first is to complete a content audit of ipfs.io, using the existing structure for content on it that we put into place. Have you worked on that?
E
Not really. That being said, what I could do, if you give me a minute to look for it, is show you some other resources I'm looking at that will inform what this deep-dive evaluation would look like for the content audit and the actual site audit, so you can see what the framework looks like, and also what kind of information is informing this evaluation.
B
Right, great. So this would mean that three months from now we would have a set of screens of the website and the value-add that they're having; we'd effectively have a map of the website and all of the content that lives within it, and maybe some amount of information or a feedback loop on whether or not that content is out of date and needs help. Is anything going to happen before then? Because if, by the end of September, we have that high-level picture but we haven't done anything about it, that seems like we're taking a slow, long-term approach to something that is also a short-term problem.
A
A lot. Simply by doing an inventory, actually getting all that information in one place, a lot of things emerge on their own. We do need to do that audit work in order to determine our next steps.
However, that said, it's not just "here's all the stuff we have"; it's "here's how we get to these items, here's how we get out of these items, and here's where these items are linked from elsewhere in our ecosystem." So I think what's being put together is something that enables you to mobilize pretty quickly. It's not that we need to go sit on this for another month to do analysis.
F
Yeah, one thing to add: I would say we've got to be careful, because it's a combination of wayfinding, just understanding where we are at this point in time and seeing if we can direct people to the appropriate bits of information quicker, and that'll give us the short-term goal. Long term is actually understanding the audience and what they want to achieve, and then building some kind of product or platform that will facilitate that, and that is going to be ongoing.
So I think by the end of the three months we'd like to have that sort of broader vision and scope in place, and understand the landscape: what people are currently using to do that effectively, what development effort would be required for us to do the same, and whether we want to commit to that much.
B
My question is more like: can we front-load some scrappy evaluation work, such that we can also make tactical improvements that we can measure throughout the quarter, instead of starting improvements in Q4, which just seems like a long wait before we start making progress? I want to see more of a hybrid approach where we're actively making some improvements as we go along that we can measure ourselves on.
A
That does tie into the third of the primary goals in that spreadsheet, which is to improve existing content to a degree, but also to do that with an emphasis on collecting metrics that will enable us, alongside what we're doing in the other sections, to make sure that we're focusing our efforts on things that we know are useful, so that we aren't spinning our wheels.
A
Carry
on
the
number
of
improvements
that
you
know
very
tactical
improvements
of
protocol
that,
in
doing
so,
are
going
to
collect
metrics
about
how
people
are
using
the
learning
resources
that
they've
got,
which
are
going
to
enable
us
to
make
good
decisions
in
conjunction
with
this
content
audit
and
in
conjunction
with
hiring
somebody
to
actually
be
producing
the
content.
Now
all
of
these
things
do
converge
and
able
to
in
order
to
enable
us
to
move
quite
quickly.
A
So
that
said,
yeah
that's
one
of
the
reasons
why
we're
taking
this
approach
so
that
you
know
we're
not
one
thing
that
worries
me
is
I,
don't
want
to
be
creating
content
or
amending
content
that
turns
out
not
to
be
what
we
eat
and
we
haven't
really
necessarily
been
able
to
do
a
terrifically
good
evaluation
of
where
our
content
is
failing
us
at
this
point,
so
working
with
us
on
what
flawed
knowledge
to
be
able
to
create
a
whole
bunch
of
new
content,
especially
when
we
don't
have
this
acne
to
create
the
content
itself
is
a
little
bit
of
a
tricky
thing.
A
You
know,
but
we
are
trying
very
wholeheartedly
to
try
to
make
this
something
that
you
know
once
once.
As
the
pieces
come
into
place,
we
are
able
to
move
at
a
higher
speed
so
that
that
phase
in
the
rows,
four
and
five,
or
rather
five
and
six
on
this
open-air
spreadsheet,
five
and
six
are
sort
of
working
in
they
are
not
dependent
on
each
other,
but
they
do
work
very
closely
together.
A
One
ro5
is
to
work
as
a
team
as
a
whole
to
create
a
prioritized
features
list
for
what
we
want
our
documentation
platform
to
be
that
is
based
on
our
own
personal
wish
list.
That's
based
on
research
and
competitive
spirit
and
now
I
guess
documentation
sites
that
also
works
in
conjunction
with
something
that's
a
bit
more
of
chris's
task,
which
is
to
determine
a
best
possible
tech
stack
or
framework
for
the
ipfs
oxide,
which
will
then
be
used
to
inform
future
dioxide
is.
A
Instead
we
want
to
make
sure
that
we're
not
creating
a
lot
of
our
own
bespoke
stuff
when
already
existing
things
are
out
there
that
work
just
as
well,
if
not
better,
we
want
to
make
sure
that
it's
something
that
is
easier
for
a
broader
number
of
people
to
contribute
to
an
edit,
which
is
one
of
our
major
failings
right
now.
So
this
is
those
three
things
together
sort
of
illustrate
the
strategic
component.
Yes,
that
is
strategy
on
we're
getting
closer
to
tactical
improvements.
As
you
get
on
that
list,
yeah.
A
So we're using a strawman question for that, which enables us both to test some of the things that you just mentioned and also to get a better grip on the goal-based personas. What we're intending to do is put a question front and center, I think on ipfs.io, that says: what do you want to do with IPFS? For example, "I work with large datasets" or "I am concerned about data provenance," and so on and so forth.
A
It's
it's
the
very
beginning
of
what
would
eventually
be
like
a
very
large
user,
an
adventure
book
style
way
of
getting
into
the
documentation,
starting
with
your
initial
goal
in
mind
functionally
what
these
folks
are
gonna
get.
Is
you
know
a
link
to
you
know:
they're,
not
gonna
at
this
at
this
stage
in
the
game,
get
personalized
instructions
for
how
to
get
started
without
doing
those
for
large
datasets,
because
that's
much
more
towards
our
end
goal.
What
this
does
do
is
enable
us
to
fly
test.
B
I think I'd like to see us use this approach more often. I think it sounds great, in that hey, we have tons of people who are coming to us. We have, you know, four hundred monthly contributors on GitHub across IPFS, and I'm sure a lot of those people are touching our docs site at some point, and if they could point us to the things that are wrong with it, or to the things that have not been useful on it, that would help.
A
Terry, I don't want to steal your thunder, but I know that some of your work on ProtoSchool is devoted toward implementing some of these inline metrics, like that "Is this useful?" sort of button feature. You had a couple of things on your list, and we're also intending to use evidence of how that is received on ProtoSchool to help inform row number five and the sort of feature set that we would want on the main IPFS docs in terms of feedback mechanisms. That's part of that features inventory.
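[Editor's note: the inline "Is this useful?" button discussed above amounts to a small client-side beacon that records a per-page vote. The sketch below is illustrative only; the endpoint path, payload shape, and function names are assumptions, not anything ProtoSchool or ipfs.io actually ships.]

```javascript
// Minimal sketch of an inline "Was this page useful?" metric.
// Hypothetical: the /api/docs-feedback endpoint and page identifiers.

function makeFeedbackPayload(pageId, useful) {
  // One vote: which docs page was rated, and whether it helped.
  return JSON.stringify({
    page: pageId,
    useful: Boolean(useful),
    ts: Date.now(),
  });
}

function sendFeedback(pageId, useful, send) {
  // `send(url, body)` abstracts the transport (in a browser this
  // would typically be navigator.sendBeacon) so the logic stays
  // testable outside the DOM.
  send('/api/docs-feedback', makeFeedbackPayload(pageId, useful));
}
```

Bound to thumbs-up/thumbs-down buttons on each docs page, the aggregated votes would be one input to the row-5 content and feature prioritization mentioned here.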
A
I mean, it may be that, based on the decisions we make as part of this, the cleanest way to do that is to incorporate it into whatever the next-gen docs site is. If the next-gen docs site doesn't enable us to get feedback quickly enough, we may need to jerry-rig something into the existing docs site. That's the sort of in-flight stuff that we're going to have to figure out as we go.
But really, row 5 is about: here are all of the features that we want in the most beautiful, fantastic, perfect docs site in the world. What gets implemented in what order, and what lands on the new platform versus the existing platform, is going to have to be open to some tactical discussion as we get further down that path.
B
I guess the thing I was trying to point out, which, rereading row 5, it doesn't seem to quite get at, is this: not just from a features perspective but from a content perspective, having kind of a prioritized list through feedback loops with the community.
I think doing an audit of that, and including it in the content audit, like "where are people's current problems with our documentation," would make a lot of sense. You could also use feedback on the docs site itself to understand where people are having trouble with it. But I think getting that feedback loop on the content, not just on the features needed for a new content platform, is really useful, and useful to start tracking from the metrics perspective as well. Right.
A
What that does mean, realistically speaking, is that we may end up needing to prioritize things around the features wish list in order to get more detailed information about the things that we suspect. As a result of the information we have, we may be looking through the content and find some glaring oversight in a certain amount of it.
A
We
might
find
that
you
know
sort
of
mid-range
onboarding
as
a
result
of
the
audit
starting
to
feel
really
bad,
and
that
is
going
to
reveal
that
we
may
need
to
be
more
aggressive
about
how
we
ask
people
stuff
and
that
that
you
know,
is
it
necessarily
gonna,
be
like
oh
well,
let's
just
wait
till
we
have
a
new
shiny
done.
You
know,
that's
that's
gonna
be
yeah.
A
We
want
to
stop
and
and
dig
into
this
a
little
bit
deeper,
either
through
adding
some
metrics
to
the
existing
or
questionnaire
mechanisms,
or
something
for
the
existing
documentation
and
or
some
other
means,
but
I
think
one
thing
we
want
to
make
sure
is
that
we
is
that
we
baked
some
leeway
into
this
quarter
in
order
to
act
on
those
things,
as
they
reveal
themselves.
Yeah.
B
Even just having a line item like "we identify and resolve three P0 documentation issues," or something like that, would make me feel more confident that we're prioritizing a share of our time toward resolving open, painful issues for our users. My worry is that, in an attempt to do a really holistic, long-term process, we will not deliver any value to our users for a long time, and so we'll continue and perpetuate this experience of "hey, our documentation site sucks," because we haven't actually improved it.
B
We've
been
analyzing
it
and,
and
I'd
like
to
see
us
like
just
just
for
like
the
most
p0
of
those
things
like
actually
go
in
and
make
hah
sixes
or,
at
the
very
least
just
be
like
this
is
outdated
or
something
so
as
we're
doing
this
content
analysis
so
that
people
can
start
relying
on
our
documentation
more
or
at
least
understand
what
they
can't
rely
on
in
the
short
term.
In
addition
to
having
the
long
term
be
more
accurate.