From YouTube: IPFS Weekly Call 2019-06-10
Description
IPFS Newsletter: https://tinyletter.com/ipfsnewsletter
QRI - building the data bazaar.
A
Hello, and welcome to the IPFS weekly call, where we get to learn about the amazing stuff that's being built on top of IPFS. Today we're going to hear from Brendan, who is CEO of Qri, which is being built on top of IPFS. Qri is a peer-to-peer tool which helps people handle and share data. So Brendan, I'm going to let you take it away.
B
Amazing. Thank you, Portia, and thanks everybody for coming out to the weekly call. So, as Portia mentioned, I work at Qri, and we are trying to do data science in a new way. We like to call the thing we're building the data bazaar, borrowing the software metaphor of the cathedral and the bazaar: the cathedral style of building software versus the bazaar style, which is the origin story of open source.
B
Where that conversation is meaningful, it's structured in a way that everybody can understand what everyone's talking about. I think if you've ever worked with GitHub or any of these open source collaboration tools, you have a feeling for what this means. You know that you're going to create a pull request, a pull request is a request for someone to change the way your code works, and there's a process for auditing that change.
B
So imagine you have a massive CSV file with lots and lots of stuff in it, and you're just making small edits to it. You need to be able to collaborate on it. And if we zoom out for a second and think about IPFS and what IPFS is, a lot of the things that IPFS does are actually a perfect starting point for building a dataset version control system, which is what Qri is. Think about an IPFS hash: it's a bunch of files broken up into blocks.
B
We're specifically referring to a UnixFS v1 hash as a reference point, for anybody who's playing with IPLD. It's a file system, and under the hood all of those files are being broken up into blocks. And so with Qri, we've designed something that is intended to start from that primitive of IPFS — not because it's primitive, but using it as a foundation — and build upwards into a system that allows you to do convenient, structured, and rational versioning.
B
And so today — some of you on the call are already familiar with what Qri is, and we've met and chatted a bunch, but we've made some pretty major progress in the last number of months. So today I'd like to show you some of what's been happening, and feel free to just put your hand up and ask questions at any point.
B
Stop me whenever you want to see what's going on, but I'll give you a high-level tour. For today I thought it would be fun to prep something that's more relevant to the IPFS community. So: how many folks have ever wondered just how many nodes are online at any given point? Yes, yes, yes — okay, cool, yeah!
B
So today let me just start — I'm going to share my screen; I'm just going to share my whole desktop, and hopefully my desktop is not too muddy. All right, straight to code land, that was smart. Cool. So here is an answer to your question. If we zoom in on — when was this, Monday, June 10th? If we count the unique number of peers by hour, there were 3,446, and then I have to move this window.
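The per-hour unique-peer count Brendan shows can be sketched with a simple bucketing pass — a hypothetical illustration in Python, not Qri's actual implementation; the peer IDs and timestamps here are made up:

```python
from collections import defaultdict
from datetime import datetime

def unique_peers_by_hour(sightings):
    """Group (timestamp, peer_id) sightings into per-hour unique-peer counts."""
    buckets = defaultdict(set)
    for ts, peer_id in sightings:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        buckets[hour].add(peer_id)  # a set, so repeat sightings don't double-count
    return {hour: len(peers) for hour, peers in sorted(buckets.items())}

sightings = [
    (datetime(2019, 6, 10, 14, 5), "QmPeerA"),
    (datetime(2019, 6, 10, 14, 30), "QmPeerB"),
    (datetime(2019, 6, 10, 14, 45), "QmPeerA"),  # same peer seen twice in one hour
    (datetime(2019, 6, 10, 15, 10), "QmPeerC"),
]
counts = unique_peers_by_hour(sightings)
```

Counting distinct peers over a whole day works the same way with a coarser bucket.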
B
I'm sorry — and if we look at it over the last day, there were 6,000 unique peers seen. What you're looking at is a visualization of a Qri dataset; this is the overview of it. It looks like a GitHub repo, but our notion of a dataset is far more granular than a repo: you would have many datasets, many more than you would have repos. But here's the thing that's really important about Qri.
B
The demos aren't being nice to me today, but this is actually a demonstration of connecting to the distributed web, and at the same time it also serves my dataset up for me on localhost. The thing that's most important about Qri is that it tries to normalize this conversation around data, and the way we've done this is we've developed a document model where everything is structured the exact same way when we're talking about datasets.
B
The actual contents of a dataset — if we think about the CSV file in a dataset — we call that the body, and we're working very similarly to the way HTML documents work. All of our metadata goes in something called meta, and then meta, structure, and transform are collectively referred to as the head. But the body is the actual data. So at any given point, you can literally just pull this hash off of IPFS — we'll let this load in the background.
B
While we resolve that locally — anyway, the data itself is actually right there and always accessible to you. At any given point you can go to this hash, slash body.csv, and you will see the data. Actually, in this case I think it's a JSON dataset — yeah, this is JSON, so it would be body.json. But we get what we need to start.
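The document model described here — body plus a head of meta, structure, and transform, all addressable under one hash — can be sketched like this. The root hash and path layout are illustrative assumptions, not Qri's exact on-disk format:

```python
# Sketch of the dataset document model from the talk: the tabular contents
# are the "body"; meta, structure, and transform collectively form the "head".
# Path shapes mirror the "<hash>/body.json" access pattern Brendan demos.

def dataset_paths(root_hash, body_format="json"):
    """Build gateway-style paths for a dataset's components under one hash."""
    head = {name: f"/ipfs/{root_hash}/{name}"
            for name in ("meta", "structure", "transform")}
    body = f"/ipfs/{root_hash}/body.{body_format}"
    return {"head": head, "body": body}

paths = dataset_paths("QmExampleHash123")
```

So fetching the raw data is always just one well-known path under the dataset's hash.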
B
So this is the initial version, and then moving forward, as we make all of these changes, every single change is tracked and every single change is attributed. Every single time, we're writing this down as an IPFS hash and moving it around as we need it. But we also have a chat — one second, Portia, let's do something live. "Is there live support for live datasets?"
B
Right now we think of that as a separate set of concerns. Eventually we'll get into pub/sub-style live distribution of stuff, but one of our big primitives is that we are very snapshot-based right now. So the short answer is no, there's no support for anything live. I think that's a great point, though.
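The snapshot model — every change written down as a hash that links back to the previous version — can be sketched as a minimal content-addressed chain. This is a toy illustration in Python, not Qri's actual commit format; the field names are assumptions:

```python
import hashlib
import json

def commit(body, prev_hash=None, author="b5"):
    """Record one snapshot: hash the body plus a link to the previous version."""
    record = {"body": body, "prev": prev_hash, "author": author}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return digest, record

log = {}
h1, r1 = commit([["date", "unique_peers"], ["2019-06-09", 6000]])
log[h1] = r1
h2, r2 = commit([["date", "unique_peers"],
                 ["2019-06-09", 6000],
                 ["2019-06-10", 3446]], prev_hash=h1)
log[h2] = r2
```

Because each snapshot embeds its parent's hash, the history is tamper-evident and every change stays attributed.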
B
To start this conversation, we have to deal with the problem of keeping this data current, and we kind of have two options in this very concrete use case. What this dataset is doing under the hood — to get into the details of it — is this: each dataset comes with something called a transform script. We've embedded a programming syntax into Qri called Starlark that looks a lot like Python, and I can actually pull this up in an editor.
B
So it's a little easier to see. You can actually write Python-like code that explains to a dataset how to update itself, which is a very useful tool, because we've now bound that transform script to the dataset itself, and it moves around with the dataset. So if you add this to your Qri node, and it moves from one peer to another, you have the majority of the details you need to recreate that dataset, and you get your own update button to rerun it.
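A transform bound to its dataset can be sketched like this. In Qri the real thing is a Starlark script; this is a plain-Python analogue for illustration, and the dataset name and appended row are made up:

```python
def transform(ds):
    """Recompute the dataset body from its previous version.

    In Qri this logic ships with the dataset as a Starlark script, so any
    peer who receives the dataset can rerun it. The appended row stands in
    for a live fetch (e.g. from a metrics server)."""
    body = list(ds.get("body", []))
    body.append({"date": "2019-06-10", "unique_peers": 3446})
    return {**ds, "body": body}

ds_v1 = {"name": "ipfs_node_count",
         "body": [{"date": "2019-06-09", "unique_peers": 6000}]}
ds_v2 = transform(ds_v1)  # anyone holding the dataset can press "update"
```

The key design point is that the update logic travels with the data, so reproducing the next version doesn't depend on the original author.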
B
Everyone rerunning this constantly is not really going to help our DHT stay healthy — it's just a lot of excess requests that we'd all have to serve. It would probably be smarter if we set this up and scheduled it to automatically update, and that's what we've built at Qri most recently. We're now calling it a fog service, because it feels like a fog service. And so what this means is, if I do qri update list —
B
Hopefully you can see this — let me make it a little larger, and I'm going to wait for my screen share to catch up. Okay, cool: qri update list. This is a list of datasets that are scheduled to automatically update. I can see that the third item here says that in 23 hours it's going to rerun this shell script. And if we look at these shell scripts — where is that shell script? Let's find it.
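The "in 23 hours" readout is just a next-due-run listing for a daily schedule. A toy sketch, with hypothetical dataset names and times (not Qri's scheduler code):

```python
from datetime import datetime, timedelta

def next_runs(schedule, now, period_hours=24):
    """List (dataset, hours until next run) for a fixed-period update schedule."""
    out = []
    for name, last_run in schedule.items():
        due = last_run + timedelta(hours=period_hours)
        out.append((name, round((due - now).total_seconds() / 3600)))
    return sorted(out, key=lambda item: item[1])  # soonest first

now = datetime(2019, 6, 10, 12, 0)
schedule = {
    "b5/ipfs_node_count": datetime(2019, 6, 10, 11, 0),  # ran an hour ago
    "b5/other_dataset": datetime(2019, 6, 10, 2, 0),
}
runs = next_runs(schedule, now)
```

A dataset that last updated an hour ago on a daily schedule is due again in 23 hours, matching the listing on screen.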
B
One second here — no, that's not it. So this is the shell script under the hood. What this is doing is: this is running on my computer, and this is something Qri has scheduled, through a daemonized process registered with my operating system, to reach into my machine and connect to Kubernetes over a set of secure keys that I control and don't have to distribute with the dataset. We run a proxy connection to a Prometheus instance, which is going to provide us with data.
B
We wait for that connection to occur, and then we run the transform script that depends on that server: it looks at localhost and checks whether there's a Prometheus instance it can access that's serving this data. That's going to update the dataset itself, and it will then publish, because I've included the publish flag.
B
You can take this data and do whatever you want with it — whatever you think is useful or relevant — but it's structured in a way such that you have an audit trail, so you can actually see how this is working. If I've done my job correctly, I've annotated everything I could with metadata that lets you figure out what's going on and how this is working, and I've included some comments in the transform script as well.
B
Thanks, Portia — okay, amazing. Yes, Jared, about fog services: we could totally talk about fog and mist and other particles in water; I'm into all of those. But the point being: we also include these visualizations just to make everything quick and easy, and this will just update itself over time. The last thing I should note is that maybe I can access this locally.
B
We also make the gateway available — I should do that; I always get that wrong, that second slash is really a nightmare for me. Port 5001, pardon me. So if we actually look at that, this is the actual contents of the dataset itself, and you can see that every single one of these snapshots is an individual thing with references to other hashes. This is how we do comparison work — yeah, this is what's going on here.
B
I should probably stop for questions, but last but not least, we've done a bunch of work to make sure that we're fully interoperable with the existing IPFS ecosystem. So when you're running qri connect, which is our version of ipfs daemon, you can actually get to the web UI, and this is kind of fun: you can see the version of the thing registered properly, and it's fully there and ready to roll.
B
Cool, yeah. In terms of presentation details, it's easier to talk through some of this stuff. Over the course of getting this stuff rolling — the next couple of months are going to be an exciting time at Qri. We've finally passed a very important milestone for us: the back-end features of building and managing a version control system are far more fleshed out than they've ever been. We have a lot of work to do on documentation.
B
We have a lot of work to do on tutorial writing, and then we have a very, very big overhaul of our user experience coming on the front-end side, but we're very happy with where the back end is at. Now that we have this capacity to auto-update and auto-publish, we think it forms a really exciting system where people can be designing these datasets on their own — they are their own source of authority on what that data is — and are now able to publish automatically.
B
And ideally this is helping us get around the two nasty problems in data, which are auditability and keeping things fresh. So we think of this as one giant data bazaar of stuff that you can get access to. "Is there a public registry of all the datasets that people publish and maintain?" There absolutely is a public registry of all the datasets people publish and maintain: registry.qri.io.
B
Thank you for the wonderful questions. It is worth digging in a little bit here on what the registry is. For us, the registry maintains two things. First, we enforce unique peer names there — you can see my peer name is b5, and that is actually negotiated with the registry, which is a centralized system in relation to a decentralized system. It also handles search for us.
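The unique-peer-name guarantee is a small amount of centralized state: a name maps to one public key. A toy sketch of that negotiation (illustrative only; the method and field names are assumptions, not the registry's API):

```python
class Registry:
    """Toy registry that enforces unique peer names, keyed by public key."""

    def __init__(self):
        self.names = {}  # peer name -> public key of the owner

    def claim(self, name, public_key):
        """Grant the name if unclaimed or already owned by this key."""
        owner = self.names.get(name)
        if owner is not None and owner != public_key:
            raise ValueError(f"peer name {name!r} is taken")
        self.names[name] = public_key
        return name

reg = Registry()
reg.claim("b5", "pubkey-brendan")  # first claim succeeds
```

In the real system the claim is backed by a proving ceremony over the key pair, as Brendan describes later; here it's reduced to a dictionary check.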
B
If you run qri search, that's going to the registry. We're working on distributed search, but that's obviously a very experimental neck of the woods. And finally, what the registry does — if you think about this as a parallel of what GitHub just recently launched in terms of repo availability — our registry is just a backing layer for the distributed system.
B
All we're dealing with in the Qri registry is keeping hashes available, so that when you close your laptop, the hash that you published is still there. So it looks a lot like a GitHub-style system, where what GitHub has is just hosting for your git repository. This is very similar to what Qri is doing: it's just hosting a copy of your hashes. The Qri registry has no capacity to publish anything on your behalf.
B
All commits are signed with a special key pair that is provisioned for every user's Qri node. It's different from your IPFS peer ID, mainly so that you can have many IPFS machines and use the same profile. "Is there a concept of forking?" Yes — forking is just the de facto way that things work. If you run qri add on somebody else's dataset and then you edit that dataset, it forks, and now it's just your own; that's automatically set up for you.
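The fork-on-edit behavior can be sketched in a few lines — a hypothetical illustration (field names and semantics are assumptions), not Qri's actual code path:

```python
import copy

def save_edit(dataset, editor, new_body):
    """Save an edit; if the editor isn't the author, the dataset forks."""
    result = copy.deepcopy(dataset)
    if dataset["author"] != editor:
        result["author"] = editor           # the fork: now it's your own dataset
        result["forked_from"] = dataset["author"]
    result["body"] = new_body
    return result

original = {"author": "b5", "name": "ipfs_node_count", "body": [1, 2, 3]}
fork = save_edit(original, "alice", [1, 2, 3, 4])  # alice edits b5's dataset
```

The original is untouched; the editor simply becomes the author of their own lineage, with provenance preserved.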
B
We haven't figured out merging. We know technically how we're going to do pull requests, but we haven't actually written any of that code yet. Oh — the biggest thing in Qri that's different from a git repository is the data model: instead of the arbitrary files of a git repo, you have a Qri dataset. So you can merge any two datasets, and you can compare any two datasets at any given point, because there's no confusion about what file is where — we know exactly what each file is supposed to do, if that makes sense.
C
B
I can just share really quickly: we have a shortcut for that, which we call recall. I can do qri save with the recall flag set to tf, and that will just pull the transform out of my history from one version back. Now, if I wanted to go back two transformations, I'd recall from two back — and this would error, because there isn't a transform two versions back. If you're thinking that's kind of a funny way to think about it: you're thinking across the possible versions of the dataset.
B
The thing is, if I do this — no, I can't rerun the transform right now, because my Prometheus server isn't turned on — but recall works. Recall is the thing that, under the hood, qri update run on this node-count dataset uses; it does the exact same thing. It's just an alias for recalling the last transform script, because it's so common to do.
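Recall — pulling a component such as the transform script out of an earlier version, and erroring when it isn't there — can be sketched like this. The history layout and the "tf" key are illustrative assumptions:

```python
def recall(history, component, steps_back=1):
    """Pull a component (e.g. the transform script) from dataset history.

    history is oldest-first; the latest version is history[-1]. Raises if
    the version that many steps back doesn't contain the component."""
    version = history[-(steps_back + 1)]
    if component not in version:
        raise KeyError(f"no {component!r} {steps_back} version(s) back")
    return version[component]

history = [
    {"body": "v1", "tf": "def transform(ds): ..."},
    {"body": "v2"},                              # manual edit, no transform stored
    {"body": "v3", "tf": "def transform(ds): ..."},
]
script = recall(history, "tf", steps_back=2)     # v1's transform
```

Asking for a transform one version back would error here, since v2 was a manual edit, which mirrors the failure mode Brendan describes.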
B
Yeah, on that note: "Do you need custom code, or is how Qri interacts with a new dataset pretty automagical?" So there are two really automagical parts inside of the transforms. We have something called download and something called transform, which are two special functions that you define inside of a Qri transform script. The signature of download and transform is ds and then a context, and ds just gives you the last version of the dataset, so you can see how cool — you know, we had —
B
You know, this was the body, and I can examine that, and this works really well for appending to logs. So you could say: hey, my dataset had these ten entries; go to the last entry and start the date stamps from there. Those two functions, if you define them, Qri calls for you — they're automatically called in the background. But if you define neither of them, nothing will happen, because your transform isn't doing anything special.
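The call-if-defined behavior of the two special functions can be sketched as follows — a plain-Python stand-in for the Starlark runtime, with the context argument omitted for brevity; the script body is made up:

```python
def run_transform(module, ds):
    """Call the special functions only if the script defines them:
    download runs first, then transform, each receiving the dataset."""
    if hasattr(module, "download"):
        ds = module.download(ds)
    if hasattr(module, "transform"):
        ds = module.transform(ds)
    return ds  # a script defining neither function changes nothing

class Script:
    """Stand-in for a user's transform script."""
    @staticmethod
    def download(ds):
        return {**ds, "raw": "fetched bytes"}   # e.g. an HTTP fetch

    @staticmethod
    def transform(ds):
        return {**ds, "body": ds["raw"].upper()}  # shape raw data into the body

result = run_transform(Script, {"body": None})
```

An empty script is a no-op, which matches the "nothing will happen" case above.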
B
If that makes sense — next question. Moving on to Johnny's question: how do you handle semantics, and interop with JSON-LD? This is a great question, Johnny, that I love digging in on. JSON-LD support is planned. We currently support DCAT as a raw specification and RDF as an export format. So Qri has semantic understanding of your data, with a couple of caveats, because we have to get a little specific about what the word semantics means.
B
Titles of all of your datasets — that's all doable. But what we want to get to is for Qri to understand that when I say population, I'm talking about a count of people, and that's a big long conversation about semantic mapping: mapping things that humans know onto things that machines know, which is a big messy problem. And then, finally: are you working with governments on their Open Data initiatives? Yes, we work with governments — particularly New York.
B
We work a lot on archiving climate data, which is another sector that tends to get stale and tends to be difficult to keep your finger on the pulse of. I think the people who gravitated to our work the earliest have been mainly in the civic tech sector; we've had a lot of interest from that side.
A
B
— data that I know is incorrect, and there's no method to have that dialog right now. So one thing a number of places are very excited about is that someone can make an auditable upgrade: I can take that data and prove that exactly what I changed is this stuff, and only this stuff. We didn't really get into Qri's diffing tools today, but we have a giant differ that lets you do structured data diffs, and I can show you that.
C
B
Yes, totally — can I make a fancy nodes dashboard? Yes, you can totally make a fancy nodes dashboard. Everything you're seeing on the front end is leveraging a JSON API, which is available when you run qri connect, so that's locally available. We don't make a hosted version of that available, because we really want you to use the distributed tools. So that's totally possible, and — I shouldn't be talking about this yet, but — we're inches from GraphQL looking like it's a possibility.
B
I can't promise it — it's going to take us like six months — but we think that you will be able to turn the entire Qri network into a GraphQL thing, which would be fun. Getting to identity, management of identity, and private datasets — which I think go hand-in-hand — those are really what constitutes our goals for the next eight months to a year.
B
It's an area of really active research. What we have today is: you provision just a basic cryptographic key pair that almost exactly resembles your peer ID — so you have a public-private key pair, and that's how we manage identity on Qri for now. That makes sense, yeah. "When you go through the process of setting up a Qri node, do you just run qri setup, and that provisions the two key pairs?"
B
Qri setup makes sure you have an IPFS repo, and then it makes you a Qri key pair. When you choose a peer name, we register the public key with the registry and say: can you prove this? Then there's a simple proving ceremony, you can claim an identifier, and we make sure you can't claim too many in an hour. That's about it. Awesome — thank you so much for listening, everyone. Does anyone have any other questions?
No question — just this one in the chat:
B
We'll be at IPFS Camp! I'm very excited to share — a lot of what I've just shown you, the guts under the hood, are things that we're really hoping to port back into the IPFS community, and yeah, we're really looking forward to the many chats at IPFS Camp.
A
That's awesome — we can't wait to learn more, and thank you once again. Thank you very much for taking the time to explain Qri and the wonderful work that you're doing. And for everyone else: we are not going to have an IPFS weekly meeting next week or the week after, because of IPFS Camp, but we will continue again, I believe, at the beginning of July. So thank you very much. I will put out an issue about this, and I will see everyone in the future — next, in July.