From YouTube: IPFS All Hands 🙌🏽📞 Nov 20, 2017
A
Okay, all right, we're recording. This is the IPFS All Hands call for Monday, November 20th, 2017, and we have 21 people on right now. To begin with, ping me if you have any agenda items; we already have a couple, so go ahead and add them. Looks like we have four demos, all right, so we might have to timebox people's demos, but it's great to have so many demos happening this week. Oh, is there anybody new who wants to say hi? Is anyone interested in saying hi?
B
Hello, my name is Stanislav. I come from Poland. I recently came up with this little project, which is called Merkle Share. It's a basic pastebin on IPFS. It works pretty well because it's very simple: it encrypts your data if you want it to, and it's got some basic functionality I'd like to share with you, if you want to spare the time. Cool.
A
Okay, welcome Rico. All right, last chance for anyone who wants to say hi. You don't have to, but you're welcome to. Okay, all right. Well then, let's jump into the agenda, all right. So, first item. Oh, remember to put your name on your agenda item, so we know who's who. Looks like the sprint helper has some improvements.
D
I just added my name, the one that says Jenna. So, just a quick note on Sprint Helper: I just saw that Sprint Helper right now is posting the time as 4 p.m. UTC, which I think misled some people. I see there's a comment here from Ross Jack that thought that the all hands was one hour ago, and I'm not entirely sure how to update Sprint Helper or where it pulls that time from, so.
D
So I just opened the issue for the next js-ipfs release, and as usual I created a very long issue, with lots of highlights; probably there is even more than what I have already listed. So if I missed something, please do let me know, but also please do check out the issue. There are performance improvements, there's Windows support.
D
There are more tests coming, more features coming, and a new streaming API, which I think a lot of people will love, because they can now pick which streaming library to use: readable streams, pull streams, or even no streaming library at all. That was a huge pain point for some of our users, and I do hope it's not anymore. There's a bunch of stuff, so yeah, I invite everyone to git clone
D
the repo, check out master, npm install, and npm link it to any of your projects, and then, if you could, run your test suite against your project with the latest js-ipfs and tell me. There are a couple of breaking changes. One is with the streaming API, and the other one is with the pub/sub message: we now respect the proper format, with a change from topicCIDs to topicIDs, and it's documented on the issue.
D
But if you find more things that are not working for you with the newest release, please do let me know. We want to make sure that it is a smooth transition, that everyone understands what is changing and why it's changing, and that everyone has the full context. And yeah, in order to really do the release,
D
we have just one PR left, which is going to add some interoperability tests for pub/sub, to avoid having breaking changes in the future, as we have for other things like bitswap and so on. And once that PR lands, then we will do the normal cycle of announcing on Reddit, on Twitter, and telling everyone on IRC, saying: hey, in two days we're going to release this thing; if you find any problem, please tell us now.
A
I have a question about the progress bars: any chance those progress bars would also make it possible to have a progress bar when you're using a pinning service? Like, "is it pinned yet" is something that I would love to have a progress bar for.
D
Yeah, yeah. It's a progress bar for transfers. It works with the js-ipfs-api, so you can use it against go; it's the same thing that go uses to give you the progress bar when you do an ipfs add, and so we have that now in js-ipfs. We haven't talked about adding that feature to pinning, but I don't see why not. I definitely saw, or participated in, a couple of conversations where people wanted to know if things were pinned or not, so we should consider adding it for symmetry.
G
Yeah, quick question: in go-ipfs there was a change so that when you subscribe, the API first serves you an empty pub/sub message to make the HTTP connection flush, and we're currently working on removing this again, because the flushing now works properly without sending that stupid message. So yeah, just a heads up: maybe you need to also do this in js-ipfs to keep compatibility.
D
Yeah, I saw it; thank you for tagging me on that issue. I saw the comments and, to be honest, I never knew that that was happening on go-ipfs, so I actually don't know why it worked flawlessly in js-ipfs-api with the same tests; we didn't make any change. So yeah, I need to test and see if there is actually any difference for the API client.
D
My guess is that, since you're sending just an empty message, it tries to get the parameters from the message and so on, and since it doesn't have the properties, it just ignores it. So yeah, let me try to understand better why it was never a problem for us and get back to you all.
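A minimal sketch of that guess (the handler itself is hypothetical; the field names follow the js-ipfs pub/sub message shape mentioned above): a subscriber that requires the message's properties naturally drops the empty flush message, because those fields are absent.

```javascript
// Hypothetical handler for illustration; field names follow the js-ipfs
// pub/sub message shape (from, data, topicIDs).
function handleMessage (msg, onData) {
  // go-ipfs's empty "flush" message carries no sender or payload,
  // so a handler that requires them simply ignores it
  if (!msg || !msg.from || !msg.data) return false
  onData(msg.topicIDs, msg.data)
  return true
}
```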
D
One more note on the pinning progress bar, Matt and everyone: we have this repo, interface-ipfs-core. It has "interface" in the name because it's the interface that we create a spec for, and we create tests to make sure that implementations are compliant with it. We have been using it to communicate API changes or API upgrades. It is a good way to get examples of how our API works, but it's also a good way to propose new additions, right?
D
So if you want a progress bar for the pinning service, I can open an issue myself as well, now that I know it's important. But for anything else that comes to mind, definitely open an issue there. If you know that someone is trying to do something and they cannot because something is missing, definitely encourage them to add that question too, because it helps us a lot to understand what people are using these APIs for.
D
It is just a starting point, a starting point on GitHub, that follows many conversations that we have amongst ourselves, either through calls or in person or through IRC, and also from following what a lot of people in the JS community are now working on. Essentially, the IPFS ecosystem is growing a lot. We have a lot of pieces, we have a lot of modules.
D
Every single module has its readme, every single module has tests, every single module has its own API, and the more your system grows, the harder it becomes to have the complete context of how each module fits with all the other pieces. And because most of that context is actually, right now, in maybe three or four people tops inside the IPFS community, we become extremely dependent on them. There are a lot of things that, for example, me or Jamie
D
can see are the biggest pain points, and we try to come up with a solution together: a framework where we can create a standard, this golden standard for our modules, that will enable anyone to take responsibility over a piece of the puzzle, that will enable anyone to do a patch release or a minor release without being afraid of breaking things. In this way, let's say, the top-level maintainers, the people that hold the glue in their heads about how all the pieces are pieced together,
D
just need to be present when the discussion really requires rearchitecting some large piece of the puzzle, and not just maintaining the tiny pieces. There are simple things, really, like just having 100 percent test coverage, so that every time someone submits a patch, they just have to do a little bit of effort to make sure that everything still stays in place and that the patch is not going to break the code somewhere else. There are things we can do in CI, like linting and so on.
D
Some of these things we already do, but they are not very explicit; it's more like we just have them because we felt they were needed. But I feel that if we have this discussion, we can then convert it into something very useful that goes into the contributing guidelines, that everyone in the community can use to help maintain all these modules we create. So yeah, the issue is linked; please go check it out. And yeah.
K
Whether it be a CSV file or an Excel document or a txt file. So we're also very interested in having this be accessible and really palatable to regular, normal people in the real world who don't have command-line knowledge, so we chose Electron, which is this GUI, and this thing, as it's running, is actually running an IPFS node under the hood. And as you can see, we have a number of options here. Basically, we have a list of datasets; these are all datasets on my machine.
K
Look at this dataset here: there's no description or anything, and I can hit add. Oh, it didn't work, but when it works, it's great! Oh, it's because I already had this dataset named chickadee. Anyway, from there I can go over to actually querying some data. I can do select name, appearances
K
from comics, and this is a comics dataset that I already have sort of running locally. What this is gonna do is actually go grab all of that data, parse it, run it, and the output of this is also a dataset, which is now a candidate for being on ipfs. I can then chart it really quickly.
K
Cool: the most popular by appearances was Bruce Wayne/Batman from this comics dataset. And, more importantly, the white paper sort of goes into this in great detail, but we go to great lengths to try and make sure that every single query collides as much as possible, where the hashes will match. So rerunning this query a second time actually does not generate new data. We check the query as it's been inputted, we resolve that to a hash,
K
we put that hash on ipfs, and we check to see if that hash already exists and points to a dataset, and if it does, then we can just return the results back to you. So if you can imagine that this was a calculation that took like 24 hours to do, now we can just sort of stream an answer instead of running the calculation again. We also can naturally collide along a number of other dimensions, including the structure of the data we generate.
K
You get everything that you would expect from a dataset that is living in a hash-based store: histories of datasets, so we can actually track every change over time; we have an associated metadata model, as you can see; and all of these things are actually just straight-up ipfs objects living on the network. And so yeah, as you see, we just pack everything in. This is the actual raw data that we query against; this is the dataset definition.
K
This is an abstract query: we've taken the query, worked out all the semantic information about it, and then ordered the keys automatically, and the hash of this object is intended to collide with anything else that shares this structure, which will then show interoperability between those datasets. Same thing with the query itself, which we actually write out:
K
this is the hash of the abstract query, which is the query rewritten into a generalized form; this, which we refer to as comics, is the hash of the dataset itself; and this was the syntax of the SQL query. And so we're also pretty interested in being able to make it easy to add and remove data on ipfs. Our addition system is pretty straightforward: if I can find the button, there we go, it's simply just dropping a CSV file in here, and we can infer the structure of that CSV file
K
and check it in on ipfs for you, distributed. Yeah, that's kind of the whirlwind tour of the whole thing, but I'd like to leave more time for questions if anybody has them. There's a whole bunch of stuff that this sort of points to: we're really excited about IPLD, and we're really excited about semantic chunking, so that we can break up CSV files along proper rows, which would then open up the realm of distributed computation and distributed querying, which would be an exciting notion. But generally I
K
think that, you know, in our time working in the data rescue movement and trying to understand what our archives look like on the distributed web, we've come to think that this permanently linked data structure is an order of magnitude more interesting than anything else you'd see in the world of linked data. We're pretty obsessed with making this as frictionless as possible. It's all completely open source; there's a GitHub repo that's linked from that website, and that's kind of where we're at. Any questions?
D
Yeah, so this is pretty cool, and you already answered two of my questions, so thank you so much for the demo. And yes, super exciting. I saw that you are hashing the query itself into an ipfs object; does that mean that any of those Electron nodes can fetch the query from wherever it's stored and run it locally, to prove that the data transformation is the same?

K
Absolutely.
Not only that: you can use it for dataset exchange. Our version of that demo for some reason isn't working; I think it's imposing a namespace collision. But the whole point is to be able to search and get data from other peers without any interruption, and we're using libp2p for that, where we basically just add one protocol layer on top of the ipfs daemon.
D
Cool. So, ideally, you should be able to create a pipeline, right? You have the original dataset, and you kind of use this transformations language, you know, like ETL, and you have the query, which is a transformation, and it gives you a new dataset. And you notice that it doesn't even have to exist until you actually need all the transformations to be applied. Yeah, yeah. So, normally
K
that takes me like 20 minutes to get to that point, but yeah, that's the hope: to be able to, as data changes, when you sense a change to the tip, automatically rerun a number of queries that are joining tables together. And so you can sort of do: hey, you know, your stock market data came out, let's calculate the GDP of Africa. That's sort of what we're hoping for.
D
Awesome.

K
I mean, more than anything, we've just been working our butts off to try and get this thing functional now, and then we're really hoping to turn around and do the IPLD integration, and then take a good look at chunker support and a couple of new file formats. That's really what we're excited about. The biggest thing we really need to get into place for querying is some sort of coordination, either DHT-based or otherwise, of who has what datasets.
K
Yeah, we're very much looking forward to contributing in both directions, if we can help IPFS grow. We've intentionally picked an Electron app because we're planning on running auto-update, which will automatically keep everybody running the latest version of the ipfs daemon. And so we're hoping that, as users grab this stuff and scale this thing, we can hopefully keep everybody running the latest versions of ipfs underneath.
D
Yeah, I'm super interested in seeing this develop and helping you succeed. And if you want to have more conversations about IPLD, and how you can transform a CSV into something else, and then how to use built-in transformations, or essentially help us figure out how to describe what an IPLD transformation is, so that it is usable for your use case and we can then explain it to other use cases: right now
K
that'd be super exciting. My hope, actually, is that the turnaround benefit of this ability is to be able to run SQL queries on the IPLD graph itself, which I think we can actually do, given the way that we've architected our side of things. We need to do a long, deep dive on IPLD, which I'm very much looking forward to, but yeah, super excited. Yeah, cool. Thank you.
Thank you.

A
Thank you. Anybody else, any questions?
L
If you can see this, that's good. Now, yeah, three terminal screens, okay, cool. So this is a combination of using docker-compose and a binary written in Go. In the directory there is a Dockerfile: you can start that with docker-compose, as I've already done up here, and then you can start the binary with its start command, and this starts a daemon that you can communicate with down here on this bottom terminal.
L
You can add the different daemons you have running. I already have my ipfs daemon running here, and I have a couple more running in the interplanetary testbed. So if I start this up, you can see that it's going to start writing the event metrics that come from an ipfs daemon into InfluxDB, which we can then view in Grafana. So if we want, we can add a couple more down here, and right now we just add them based on their...
L
So right now I've got four nodes added to the collection, and then you can list them and see what nodes you have. You can also specify tags of your own to add to the event log; in this case I've auto-added a node tag to all of the event logs that come from each node. So when you hop over here, you can view them in Grafana, and I need to work on adding more event logs to make this a little bit more useful.
L
But this is essentially a breakdown of the performance of different DHT calls and their durations. You can see down below here, it's a little bit tiny, but the node ID is attached to each of the events, and then we can look at these events in isolation if we want. It hasn't been running for a long time right now, so there's not going to be a lot of data here.
L
It's very crude right now, and I plan on building it out a little bit more this week, but I'm looking for suggestions on what could make this more useful for everyone. I know that ipfs-cluster could maybe get some use out of this, since there are a lot of nodes that they want to collect metrics on. But yeah, I can leave this open for questions, if anyone has any, or suggestions on what could make this more useful.
L
The way it works right now is that I just listen on the API endpoint, which is /api/v0/log. So if js-ipfs does that, then it should be able to work with it. I'm not completely sure this is the correct direction to go in; I think it's just been easier to add and remove nodes. That was my goal: to make it just one or two commands, so that you could spin up metrics and take them down without editing configuration files.
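The collector's core step, as described, can be sketched as follows (a simplified stand-in, not the actual tool; it assumes the log endpoint serves newline-delimited JSON events, and the tagging mirrors the per-node tag shown in the demo):

```javascript
// Simplified stand-in for the collector: parse newline-delimited JSON
// events from the daemon's log endpoint and attach a node tag before
// they are written to the time-series store.
function tagEvents (ndjson, nodeId) {
  return ndjson
    .split('\n')
    .filter(line => line.trim().length > 0)
    .map(line => Object.assign(JSON.parse(line), { node: nodeId }))
}
```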
L
ELK is another thing I've looked at, but that requires adding an ipfs node to the Logstash config, telling it to look at the endpoint, and restarting Logstash, and it's just a little bit more clunky; I wanted to make this smooth. But yes, the plan is for this to work with all implementations of ipfs that use the same HTTP API.
D
And yeah, it would help me to know which logs I have to give you. You can tell me what go-ipfs is doing, but if you have any kind of way of describing where you want the logging to happen, like before and after calls, etc., then I can give you the exact same output, so that you can measure. The other challenge there, or probably a more difficult challenge, is doing the same for browser nodes: spawning Firefox nodes, Chrome, Safari and so on, and getting the same type of information.
B
All right. If you run merkle-share like that, it will just tell you to run the right ipfs daemon, because right now the Python implementation of ipfs is not ready, but I hope that will change in the future. Personally, I'm currently participating in the IPLD work for Python. But yeah, let's do some uploads and I'll show you how it works. The simplest type of upload you can do is to just pipe it anything you like.
B
What it does is it lets you use merkle-share to decrypt it, because the secret is present only in this link and not anywhere else, which gives you some safety: you just, you know, grab a link and send it to a friend who's also got merkle-share. There's also the use case where they don't have merkle-share or ipfs at all, but we're gonna get to that in a moment. Okay, here we see the decrypted content.
B
If we try to visit the link without the key, you will see just garbage like this, and that's what I had in mind. It also kind of breaks IPFS's determinism of hashes, but on the other hand, it gives you the extra privacy of your content not being recognizable on the network. Okay.
B
This is super secret stuff. Awesome, so yeah, you get this link. The only difference here is that it's got this little web-ui prefix; it just lets me know that this link is meant for the web UI and not for the console, so that it doesn't fail when trying to decrypt. I can show you what it does when it receives such a link: yeah, "web UI download for command line is not supported yet". Oh, a typo here, but okay, let's go.
B
Let's go see what the GUI looks like. Okay, yeah, here it is. Well, there's not much really, but merkle-share is not meant to be complex; it's meant to serve ordinary people who just want to get a taste of the distributed web, so I suppose it's rational to have it built that way. There's an IPFS link and then a little "fork me on GitHub" ribbon, and the ipfs website, as seen here. I can copy to clipboard, it shows "copied" for two seconds, and then I can just go paste it in.
B
You can see that there's this "this is super secret stuff" written here. All right, is there maybe anything else? Okay, yeah, right, if I covered that. So there are like maybe three planned features I've got. One is built-in clipboard support, so that the link you choose will get copied to the clipboard right away, without using anything like xclip or opening it with xdg-open; it might just come in handy, I
B
think. Then, when the pure Python ipfs implementation is ready, I would like to use it in merkle-share, so that people don't have to run the ipfs daemon. And lastly, there's this little bug where I'm not able to put non-UTF-8, non-Unicode content on the web UI. I will attend to it in the future; it's not like the highest priority for me right now, but
B
yeah, so when you visit my repository, I've got this little motto which says, like, you know, "distributed ipfs-based pastebin", which is similar to sprunge.us. sprunge.us is like a pastebin that is accessible from curl, and you know, I really want it to mimic that, in the way that you can basically pipe anything in, and once you curl, maybe through the gateway, you can receive the content without any wrappers, I mean, on the web.
N
Currently it can display IPLD git trees and some more stuff. So you push this with the git IPLD remote, and if you just put the commit hash up here, you get a repo view which probably looks kind of familiar from some other service. And yeah, you get the tree, and you can view the tree, you can view files.
N
You can view files, it works. You can go into the trees and view other files deeper. Yeah, you can view commit lists, you can view more commits, you can view a commit: it shows you the diff of that commit. And yeah, it works on js-ipfs using the dag API, so it's IPLD. My long-term goal for this is to have some more GitHub-like functionality, like issues, pull requests and stuff like that. Yeah, and that's pretty much it, just a little GitHub-like thing on js-ipfs that I'm building. Yay.
N
It's actually running on the repo data itself, using IPLD, so there's no database anywhere. It's kinda painful for some things; I mean, probably the hardest thing to implement is going to be git blame, which I don't really imagine implementing as of now. Yeah, but the rest is actually kind of simple to do.
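A toy version of what the viewer does when it lists commits (not the project's actual code; `store.get` here stands in for fetching a git object by hash through the dag API): each commit links to its parent, so history is just a walk over those links.

```javascript
// Toy commit walk; store.get stands in for fetching a git object by hash
// (what the dag API would do against the real IPLD git data).
function listCommits (store, head) {
  const log = []
  for (let hash = head; hash; ) {
    const commit = store.get(hash)
    log.push({ hash, message: commit.message })
    hash = commit.parent // undefined/null at the root commit ends the walk
  }
  return log
}
```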
A
Since the git IPLD stuff includes that light sort of failover lookup of checking against GitHub, does that mean that, if I was using js-ipfs, I could point it at a git repo that is currently on GitHub, and I would be able to use all these features to browse through it, and it would just be using IPLD to resolve this stuff over IPFS? Is that where this shines? I'm
D
I remember a project that Tim Caswell did, like, maybe five years ago, before things like service workers and many other things were around, and it was the full git protocol implemented in JavaScript, so that it could work as a Chrome application. And so you could have an editor, like, edit code in the browser and just push commits the right way and so on. And so with this, basically, what you're saying is: now it's more than just a Chrome app.
D
You can actually have just a regular web page fetching data from GitHub through the HTTP link, manipulating it, sharing it with your friends, maybe even working on code collaboratively, because you can put it through CRDTs and a collaborative editor for everyone, and then, when you're ready, you just push it back to somewhere: it can be IPFS, it can be GitHub again, it can be any other remote. Super cool.
D
Yeah, yeah, and I wonder if it plays nicely with the Cache API. So now there is the Cache API in browsers; it is accessible to service workers and even the normal, regular page, the window object, and it is very good at caching requests. It's very good at understanding that you are requesting the same thing and just giving you the response back, so that the browser then does the job of building the rest of the webpage.
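The pattern being described, same request, stored response, is the classic cache-first lookup. A minimal sketch, using a plain Map in place of the browser's `caches.match`/`cache.put` so it runs anywhere:

```javascript
// Minimal cache-first sketch; a Map stands in for the browser Cache API.
async function cacheFirst (cache, url, fetchFn) {
  if (cache.has(url)) return cache.get(url) // same request: reuse the response
  const response = await fetchFn(url)       // otherwise hit the network once
  cache.set(url, response)
  return response
}
```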
G
A quick other question on the git thing: when I push to the IPLD remote, do I get a new IPLD hash, or how does it work? Do I get a new IPLD hash every time I push, per tree or even per commit? That's my question, yeah.
B
Well, I just wanted to mention a small thing, more of a question, really. I wanted to ask: have you guys heard of Althea? Oh.
B
Yeah, well, I think I'll be getting in touch with the people who are implementing it, and I'll try to do it so that IPFS support comes to Althea as soon as possible. I wanted just to give a shout out to the project, so that maybe a few more people could get interested, and I'll bring it up again. And that's pretty much all, really.
B
So here's some community stuff, and then a link to the whole thing.