From YouTube: IPFS Weekly Call June 10th, 2019 🙌🏽📞
Description
IPFS newsletter: https://tinyletter.com/ipfsnewsletter
A: Hello and welcome to the IPFS weekly call, where we get to learn about the amazing stuff that's being built on top of IPFS. Today we're going to hear from Brendan, who is CEO of Qri, which is being built on top of IPFS. Qri is a peer-to-peer tool which helps people handle and share data. So Brendan, I'm going to let you take it away.
B: Amazing, thank you, Portia, and thanks everybody for coming out to the weekly call. So as Portia mentioned, I work at Qri, and we're trying to do data science in a new way. We like to call the thing we're building the "data bazaar", borrowing the software metaphor of the cathedral and the bazaar as two styles of building software.
B: IPFS really helps provide a super solid foundation for that. When we say data bazaar, we're looking for some very key characteristics. We want to see something where it's a two-way conversation, where anybody can give and take whatever data they want, and where that conversation is meaningful, structured in a way that everybody can understand what everyone is talking about. If you've ever worked with GitHub or any of these open source collaboration tools, you have a feeling for what this means.
B: You know that if I create a pull request, a pull request is a request for someone to change the way your code works, and there's a process for auditing that change. Finally, you need the capacity to attribute all of these changes back to the people who made them, so that as you're collaborating you have an audit trail. This is what we get from the whole world of version control in software.
B: Unfortunately, when you go from software to data, things change a lot. Software is not data; they're not interchangeable things, and the biggest thing that changes is volume. If you think about your average GitHub repository, it rarely exceeds a gigabyte of space even with its entire history, unless you're developing something enormous, but that's a whole other conversation.
B: When you move over to the data space, a gig is very normal, and versioning is a much different conversation when you're versioning data, because often you're taking a single file and making individual changes to that single file. So imagine a massive CSV file with lots and lots of stuff in it, and you're just making small edits to it.
B: You need to be able to collaborate on it. So if we zoom out for a second and think about IPFS and what IPFS is, a lot of the things that IPFS does are actually a perfect starting point for building a dataset version control system, which is what Qri is. If you think about an IPFS hash, that's a bunch of files broken up into blocks; we're specifically referring to UnixFS v1, where a hash is a reference point for anybody.
B: Yes, yes, ok, cool. So today let me just start by sharing my screen. I'm going to share my whole desktop, and hopefully my desktop is not too muddy. All right, that went straight to code land, that wasn't smart. Cool, so here is an answer to your question: if we zoom in on Monday, June 10th, and count the unique number of peers by hour, there were 3,446. And then I have to move this window, sorry.
B: Sorry. If we look at it over the last day, there were 6,000 unique peers seen. What you're looking at is a visualization of a Qri dataset, and the overview is that we think about this very much like a GitHub repo, but our notion of a dataset is far more granular than a repo: you would have many datasets, many more than you would have repos. But here's the thing that's really important about Qri.
B: It's a very structured conversation. So I'm just showing you a visualization. Oh cool, we did great. Oh yeah, I have to run this in the background; the demos aren't being nice to me today. But actually this is a demonstration of connecting to the distributed web, and at the same time it also serves up my dataset locally. The thing that's most important about Qri is that it tries to normalize this conversation around data.
B: The way we've done this is we've developed a document model where everything is structured the exact same way. When we're talking about datasets, the actual contents of a dataset, like the CSV file in a dataset, we call the body, and we're working very similarly to the way HTML documents work. All of our metadata is stored in something called meta, and then meta, structure, and transform are collectively referred to as the head. But the body is the actual data.
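The component names above (body, meta, structure, transform, head) can be sketched as a plain data structure. This is a minimal illustration of the shape described in the talk, not Qri's actual dataset specification; the field contents here are assumptions.

```python
# Sketch of the dataset document model: a body (the data itself) plus
# "head" components: meta, structure, and transform.

def make_dataset(body, meta=None, structure=None, transform=None):
    """Bundle a dataset the way the talk describes: body plus head components."""
    return {
        "body": body,                  # the actual data, e.g. CSV rows or JSON
        "meta": meta or {},            # human-facing metadata (title, etc.)
        "structure": structure or {},  # schema / format description
        "transform": transform,        # optional update script reference
    }

def head(ds):
    """Everything except the body is collectively the 'head'."""
    return {k: v for k, v in ds.items() if k != "body"}

ds = make_dataset(
    body=[{"peers": 3446}],
    meta={"title": "IPFS unique peers by hour"},
    structure={"format": "json"},
)
assert "body" not in head(ds)
assert head(ds)["meta"]["title"] == "IPFS unique peers by hour"
```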
B: So at any given point you can literally just pull this hash off of IPFS. We'll let this load in the background while we resolve that; I could resolve it locally, but anyway, the data itself is actually right there and always accessible to you. At any given point you can go to this hash, slash body.csv, and you will see the data. Actually, in this case I think it's a JSON dataset, yeah.
B: This is in JSON, so it would be body.json, but we get what we need to start this data bazaar conversation: I have created this dataset, and it has a running history of changes over time, and each one of these is built exactly the way a Git commit is built, where each dataset version is a snapshot that references its prior one.
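The snapshot chain just described can be sketched with content hashes standing in for IPFS CIDs. A minimal illustration, assuming SHA-256 over canonical JSON in place of real IPFS hashing:

```python
import hashlib
import json

# Each snapshot records the hash of its predecessor, which is what gives
# the Git-style audit trail: change anything in history and every later
# hash changes too.

def snapshot(body, prev_hash=None, author="b5"):
    record = {"body": body, "prev": prev_hash, "author": author}
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return digest, record

h1, v1 = snapshot([1, 2, 3])                  # initial version
h2, v2 = snapshot([1, 2, 3, 4], prev_hash=h1)  # next version links back

assert v2["prev"] == h1      # each snapshot references its prior one
assert v2["author"] == "b5"  # every change is attributed
```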
B: So this is the initial one, and then moving forward as we make all of these changes, every single change is tracked and every single change is attributed. Every single time, we're writing this down as an IPFS hash and moving that around as we need it. But we have a chat, one second. Let's see: is there support for live datasets?
B: Right now we think of that as a separate set of concerns. Eventually we'll get into pubsub-style live distribution of stuff, but one of our big primitives is that we are very snapshot-based right now. So the short answer is no, there's no support for anything live. I think that's a great place to start this conversation, though.
B: We have to deal with the problem of keeping this data current, and so we kind of have two options. In this very concrete use case, what this dataset is doing under the hood, to get into the details of it, is this: each dataset comes with something called a transform script, and so we've embedded a programming syntax into Qri called Starlark.
B: It looks a lot like Python, and I can actually pull this up in an editor where it's a little easier to see. You can write code that explains to a dataset how to update itself, which is a very useful tool, because we've now bound that transform script to the dataset itself, and it moves around with the dataset. So if you add this to your Qri node and it moves from one peer to another, you have the majority of the details you need to recreate that dataset, and you get your own update button to rerun it.
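The idea of an update script traveling with its dataset can be sketched in plain Python. Real Qri transforms are Starlark scripts; the in-memory layout and the `new_rows` context field here are hypothetical stand-ins.

```python
# A transform bound to its dataset: any peer who receives the dataset
# also receives the logic needed to update it.

def transform(ds, ctx):
    # ctx is assumed to carry whatever the script fetched (e.g. new metrics)
    ds["body"] = ds.get("body", []) + ctx["new_rows"]
    return ds

dataset = {"body": [10], "transform": transform}

# A peer who receives the dataset gets their own "update" button:
updated = dataset["transform"](dict(dataset), {"new_rows": [20, 30]})
assert updated["body"] == [10, 20, 30]
```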
B: But then in this special case we have this sticky problem: Starlark doesn't know anything about it, so we have to deal with some of these sandboxing issues. And most importantly, there's the way that we're actually grabbing these metrics. Hopefully you can see this a little larger; I'm going to wait for the screen to catch up. Okay: qri update list. This is a list of datasets that are scheduled to automatically update. So I can see that the third item here says that in 23 hours I'm going to rerun this shell script. And if we look at that shell script... where is that shell script? Let's find it.
B: One second... here it is. So this is the shell script under the hood. This is running on my computer, and it's something that Qri has scheduled through a daemonized process registered with my operating system, to reach into my machine and connect to Kubernetes over a set of secure keys that I control and don't have to distribute with the dataset.
B: We run a proxy connection to a Prometheus instance, which is going to provide us with data. We wait for that connection to occur, and then we actually run the transform script that depends on that server running. It looks at localhost and, if there's a Prometheus instance that I can access, pulls this data, and that's going to update the dataset itself. That will then publish, because I've included the publish flag.
B: This will then automatically push that dataset up to Qri's cloud backup. We run something called a registry, which keeps all of your IPFS hashes and datasets live on the distributed web. So every 24 hours this is going to run, and every 24 hours we'll get new data being pushed, and all you have to do if you want to get this data is follow it.
B: Ok, amazing. Yes, we could totally talk about fog and mist and other particles of water; I'm into all of those. But the point being, we also include these visualizations just to make everything quick and easy, and this will just update itself over time. The last thing I should note is that maybe I can access this locally.
B: We also make the gateway available. I should do that on 8080, yeah, I always get that wrong; that second slash is really a nightmare for me, and so is 5001.
So if we actually look at that, this is the actual contents of the dataset itself, and you can see that every single one of these snapshots is an individual object with references to their hashes. This is how we do comparison work. That's what's going on here.
B: I should probably stop for questions, but last but not least, we've done a bunch of work to make sure that we're fully interoperable with the existing IPFS ecosystem. So when you're running qri connect, which is our version of the IPFS daemon, you can actually get to the web UI, and this is kind of fun: you can see the version of the thing registered properly, it's all fully there, and we can explore our files and stuff. This is all thanks to the wonderful work happening, particularly in go-ipfs, to make this really easy for us to bolt in. So maybe I'll stop there for questions and read this chat.
B: Cool, yeah. In terms of presentation details, it's easier to talk through some of this stuff. Over the course of getting this up and rolling... the next couple of months are going to be an exciting time at Qri. We've finally passed a very important milestone for us, which is that the back-end features of building and managing a version control system are far more fleshed out than they've ever been. We have a lot of work to do on documentation.
B: We have a lot of work to do on tutorial writing, and then we have a very, very, very big overhaul to our user experience and front-end side coming, but we're very happy with where the back end is at. Now that we have this capacity to auto-update and auto-publish, we think it forms a really exciting system where people can be designing these datasets on their own, where they are their own source of authority on what that data is, and are now able to publish it automatically.
B: Ideally this helps us get around the two nasty problems in data, which are auditability and keeping things fresh. So we think of this as one giant data bazaar of stuff that you can get access to. Is there a public registry of all the datasets that people maintain? There absolutely is a public registry of all the datasets people publish; it's at registry.qri.io.
B: Thank you for the wonderful questions. It's worth digging in a little bit, for this crowd, into what the registry is. For us, the registry maintains two things. First, we enforce unique peernames there, so you could see my name is b5, and that is actually negotiated with the registry, which is a centralized system in relation to a decentralized system. It also handles search for us.
B: If you run qri search, that's going to the registry. We're working on distributed search, but that's obviously a very experimental neck of the woods. And finally, if you think about this as a parallel to what GitHub just recently launched in terms of repo availability, our registry is just a backing layer for the distributed system.
B: So all we're doing with the Qri registry is keeping hashes available, so that when you close your laptop, the hash that you published is still there. It looks a lot like a GitHub-style system: where GitHub is just hosting your Git repository, the Qri registry is just hosting a copy of your hashes. The Qri registry has no capacity to publish anything on your behalf.
B: All commits are signed with a special key pair that is provisioned for every user's Qri node. It's different from your IPFS peer ID, mainly so that you can have many machines and use the same profile. Is there a concept of forking? Yes, forking is just the de facto way that things work. If you run qri add on somebody else's dataset and then you edit that dataset, it forks, and so now it's just your own, and that's automatically set up for you.
B: We haven't figured out merging yet. We know technically how we're going to do pull requests, but we haven't actually written any of that code yet. Oh, and the biggest thing in Qri that's different from a Git repository is that the data model is set inside of a Qri dataset, so you can merge any two datasets, and you can compare any two datasets at any given point, because there's no confusion about what file is where: we know exactly what each file is supposed to do. Does that make sense?
B: I can just share really quickly. We have a shortcut for that; we call it recall. So I can do qri save --recall tf, and that will just pull the transform out of my history from one version back. Now, if I wanted to go back two transforms, I could do recall with a tilde two. Excuse me... this would error, because there isn't a transform two histories back. It's kind of a funny way to think about it: you're thinking across the possible versions of the dataset. So if I do this...
B: No, I can't do tf~1; obviously that doesn't work, because my Prometheus server isn't turned on. But recall works. Recall is the thing that, under the hood, if we do qri update run on that dataset, does the exact same thing; update run is just an alias for recalling the last transform script, because it's so common to do.
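The recall behavior can be sketched as a walk back through version history (newest first) looking for a version that carried a transform, erroring when the history doesn't go that deep. The version paths below are made up for illustration.

```python
# Minimal sketch of "--recall tf": find the transform carried by an
# earlier version in the dataset's history.

def recall_transform(history, back=1):
    """history is newest-first; return the `back`-th transform-bearing
    version's transform, or raise if history doesn't reach that far."""
    found = 0
    for version in history:
        if version.get("transform") is not None:
            found += 1
            if found == back:
                return version["transform"]
    raise LookupError("no transform that many versions back")

history = [
    {"path": "/ipfs/QmNew", "transform": None},
    {"path": "/ipfs/QmMid", "transform": "fetch_prometheus.star"},
    {"path": "/ipfs/QmOld", "transform": None},
]
assert recall_transform(history, back=1) == "fetch_prometheus.star"
```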
B: Yeah, on that note: do you need custom code for how Qri interacts, or is it pretty automagical? So there are two really automagical parts inside of the transforms. We have something called download and something called transform, which are two special functions that you define inside of Qri, and the signature of download and transform is ds and then a context. ds just gives you the last version of the dataset.
B: You know, this was the body, and I can examine that, and this works really well for append-only logs. You could say: hey, my dataset had these ten entries; go to the last entry and start from the timestamps there. Those two functions, if you define them, Qri calls them for you; they're automatically called in the background. But if you define neither of them, nothing will happen, because your transform isn't doing anything special.
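The append-only pattern just described can be sketched with the two special functions. The names and the `(ds, ctx)` shape follow the talk, but the function bodies and data layout are assumptions, and real Qri scripts are Starlark rather than Python.

```python
# Sketch of the download/transform pair: `ds` hands the script the
# previous version, so an update can resume from the last timestamp.

def download(ds, ctx):
    # Find the last recorded timestamp in the previous body (0 if empty),
    # then "fetch" only entries newer than it.
    last = ds["body"][-1]["t"] if ds["body"] else 0
    ctx["fetched"] = [row for row in ctx["source"] if row["t"] > last]

def transform(ds, ctx):
    # Append the newly fetched rows onto the existing body.
    ds["body"] = ds["body"] + ctx["fetched"]
    return ds

ds = {"body": [{"t": 1}, {"t": 2}]}
ctx = {"source": [{"t": 1}, {"t": 2}, {"t": 3}]}  # stand-in for a remote feed
download(ds, ctx)
updated = transform(ds, ctx)
assert updated["body"][-1]["t"] == 3
```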
B: If that makes sense, that's the question. And then, yes, moving on to Johnny's question: how do you handle semantics and joining of the data, for example with JSON-LD? This is a great question, Johnny, that I love digging in on. Our JSON-LD support is planned. We currently support DCAT as a raw specification and RDF as an export format. So Qri has semantic understanding of your data, with a couple of caveats, because we have to get a little specific about what the word "semantics" means.
B: If you have a column that's labeled "population", Qri doesn't know that that's a count of people. For that you would need some sort of specification like RDF, or something like JSON-LD, where you're actually talking specifically about the same thing. I'm very excited, at IPFS Camp, to get into a big conversation with the IPLD team about graphing and linking schemas in a way that could be represented as JSON-LD, but I think that's an area of future research for us.
B: At a base layer, things like the titles of all of your datasets are all doable, but what we want to get to is for Qri to understand that when I say "population" I'm talking about a count of people. That's a big, long conversation about semantic mapping: mapping the things that humans talk about onto the things that machines know, which is a big, messy problem. And then, finally: are you working with governments on their Open Data initiatives? Yes, we work with governments a lot, particularly New York. We have a lot of great, very productive conversations here with the city of New York.
B: New York has been a great partner. We're also working a bunch at the international level on the UN Sustainable Development Goals, which is another source of majorly stale data that we work on a bunch. And through my work at the Environmental Data and Governance Initiative, we work a lot on archiving climate data, which is another sector that tends to get stale and tends to be difficult to keep your finger on the pulse of.
B: ...data that I know is incorrect, and there's no method to have that dialog right now. So one of the places where we're very excited is that someone can make an auditable upgrade: I can take that data and prove that exactly what I changed is this stuff, and only this stuff. We didn't really get into Qri's diffing tools today, but we have a giant differ that allows you to diff structured data, and I can show you...
B: Can I make a fancy Node dashboard? Yes, you can totally make a fancy Node dashboard. Everything you're seeing on the front end is leveraging a JSON API, which is available when you run qri connect, so that's locally available. We don't make a hosted version of that available, because we really want you to use the distributed tools, but it's totally possible. And, while I shouldn't be talking about this yet, we're inches from GraphQL looking like it's a possibility.
B: I can't promise it; it's going to take us like six months, but we think that you will be able to turn the entire Qri network into a GraphQL thing, which would be fun. Getting to identity, management of identity, and private datasets, which I think go hand in hand: those are really what constitute our goals for the next eight months to a year.
B: What we have today is that you provision a basic cryptographic key pair that almost exactly resembles your peer ID, so you have a public/private key pair, and that's how we manage identity on Qri for now. Does that make sense? Yeah: when you go through the process of setting up a Qri node, you just run qri setup, and that provisions the two key pairs.
B: It makes sure you have what you need, then it makes you a Qri key pair, and when you choose a peername, we register the public key with the registry and say: can you prove this? Then there's a simple proving ceremony, and then you can claim an identifier, and then we make sure you can't claim too many in an hour.
B: Awesome. Thank you so much for listening, everyone. Does anyone have any other questions? No questions? This is a fun crowd, because you all know IPFS, so we can talk very, very technically, but Qri is very much aimed at folks who don't know IPFS and who spend a lot of time just working with data, so the conversation can shift very quickly between the two.
B: I will be at IPFS Camp, and I'm very excited to share a lot of what I've just shown you, the guts under the hood, which are some things that we're really hoping to port back into the IPFS community. We're really looking forward to the many chats at IPFS Camp, and just generally excited.
A: Awesome, we can't wait to learn more, and thank you once again. Thank you very much for taking the time out to explain Qri and the wonderful work that you're doing. And for everyone else: for the IPFS weekly meeting, we are not going to have a meeting next week or the week after because of IPFS Camp, but we will continue again, I believe, at the beginning of July. So thank you very much. I will put out an issue about this, and I will see everyone at the next call in July.