From YouTube: EDGI - 2020-04-13 IPFS Weekly Call 🙌📞
Description
This week features a presentation from the Environmental Data & Governance Initiative (EDGI): Saving Data to the DWeb: A Primer and Practical Perspective
Slides: https://docs.google.com/presentation/d/1yJEKH2BQRp5SnuGYNf7R9n7pobfMHF78tsfHJK6LFxI/edit#slide=id.g6e40c5dcaa_0_161
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
One of the things that I really like to do when I give a talk like this is to ground it in theory, ground it in some ideas that people have had a while ago. When I was putting this talk together, I came across that old Tim Berners-Lee rant, "Cool URIs Don't Change." Not sure if you've come across that before, but it's from 1998, and I'm going to quote it fairly extensively throughout this.
But it's just this argument that, like, if you want people to find stuff, you have to be a really thoughtful host, and I like this framing. And it turns out that Tim Berners-Lee, in fact, is working on the decentralized web now, because it really does follow through on a lot of the stuff that he was right about in 1998. One of the really crucial pieces of "Cool URIs Don't Change" is URI versus URL, which you all are probably pretty familiar with.
This is: a URI is an identifier, versus a URL is a locator, and that's going to be really important for the decentralized web. Because when you look at a URL, you're saying: with this protocol, I'm going to ask this entity at this particular domain, and then I'm going to look at this particular location. Which is a very different way of asking than saying: hey, I'm going to ask on this protocol, and then I'm going to tell you what it is.
So I think this is true. I think this really holds up in a lot of cases where it doesn't matter a whole lot that people are going to lose confidence in the owner of the server, right? I would say that most of the stuff on the Internet is not that important. The domain is not somebody you particularly need to trust; if it disappears, you're like, okay, I'll find it somewhere else. But that is not always the case.
It just sprung up because there was a lot of political unrest in response to that election. What we initially did was start by taking a lot of the government data around environmental topics, like proof for climate change and environmental justice data. A lot of the people who started the organization were really worried that the Trump administration would come in and either (a) delete it, (b) defund the servers, (c) make it impossible to find, or (d) defund continued research in these areas of research; and these are checkboxes, not radio buttons, so all of them can happen at once. All of this is very dangerous to our ability to understand and act on real environmental issues and real environmental justice issues. There are a lot of environmental problems, like oil and gas wells releasing benzene, for example, that tend to disproportionately affect folks who are already marginalized and can't afford to be somewhere where they can breathe cleaner air.
If the government is a domain that you need to trust, it matters a lot how they handle data, and it matters a lot how they move data around. And here is a thing that happened. This is climate.dot.gov, and you may ask yourself: why does the Department of Transportation have a climate change website? Seems like that maybe would be more of a NOAA thing to do, or EPA, or a different organization, but it turns out that transportation is responsible for a large share of it.
There are other solutions to this. There's a page in the Wayback Machine; I have the browser extension for the Wayback Machine by the Internet Archive, which is great for this sort of thing, because at least you can find the old version of it, which exists and is hosted somewhere. By the way, I do think they did take it down entirely.
If you rely on a .gov page for certain information, but the website gets taken down, how do you know to look for an archive? It's plenty likely that most people would kind of stop there. They'd say: oh, I don't know, maybe somebody gave me the wrong link, maybe this thing doesn't exist anymore. And then there's this other problem, which is, I mean, we probably trust the Internet Archive. I trust the Internet Archive.
On the other hand, if you were to find an archive of a .gov website, and it was about something that people had tensions about, could you be a hundred percent certain that it was what was originally there? How could you prove it? How could you know it? And what's more, is the archive any safer than the original?
Here's what's in our file that we want to put up. The first thing we need to do is turn these into standardized chunks. In IPFS, I believe the maximum chunk size is 256 kilobytes; this is settable, and it might be different for Dat, for example. What that means is that each of these things becomes a chunk, so something that is smaller than 256 kilobytes is going to go in whole, just like putting pieces of paper in a really big file box.
B
You
just
have
a
couple
pieces
of
paper
in
this
box
and
then
these
two
are
both
full
of
the
same
CSV
file.
You
end
up
with
four
chunks
from
these
three
files
in
order
to
have
a
standard,
sighs
and
I'm,
assuming
and
I'm
curious
to
hear
from
you
guys
about
this,
but
I'm,
assuming
that
the
number
that
the
major
reason
for
this
is
that
you
need
to
know
is
the
number
of
chunks
that
you
have
will
fit
on
the
hardware
you
have.
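The chunking step described here can be sketched in a few lines of Python. The 256-kilobyte figure matches the talk; the function name and sample data are made up for illustration, and real IPFS chunkers are more sophisticated (the chunk size is configurable, and content-based chunking also exists).

```python
# Sketch of fixed-size chunking, assuming a 256 KiB default chunk size.
CHUNK_SIZE = 256 * 1024  # bytes; configurable in real implementations

def chunk_bytes(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Split data into chunks of at most chunk_size bytes."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# A small file fits in one chunk; a slightly-too-large one spills into two,
# like pieces of paper going into file boxes.
small = b"x" * 1000
large = b"y" * (CHUNK_SIZE + 1)

print(len(chunk_bytes(small)))  # 1
print(len(chunk_bytes(large)))  # 2
```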
The next thing we do is cryptographic hashing, where you basically are assigning a label to your chunk. One of the key characteristics of hashing is that it is one-way: you cannot run the function backward to get back the input. And here is a table that is significantly less complicated than what you would actually use, but that technically fits this one-way definition: if I were to say, oh, the output is 3, you couldn't tell me which input produced it.
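A minimal sketch of that labeling step, using SHA-256 from Python's standard library. (IPFS actually wraps its hashes in a multihash format, so this shows the idea rather than the exact encoding.)

```python
import hashlib

def label(chunk: bytes) -> str:
    """Assign a fixed-length hex label to a chunk via SHA-256."""
    return hashlib.sha256(chunk).hexdigest()

# The same input always yields the same 64-hex-character label...
print(label(b"benzene readings, 2019-03-21"))
# ...but the function is one-way: given only the output, there is no
# way to run it backward and recover the input.
```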
We've already gone through this a little bit, but it's good for file verification, file identification, and efficiency. The most important part is that any given file will encode differently. So if I change something in my original... For example, say I had a really important file that had to do with benzene releases in a particular area that came up in a legal case, and I have the hash of the .gov version that everyone trusted, that that was the correct hash of the .gov version, and then I had an archived version somewhere else and it matched the hash: then I'd know it was the same file.
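That verification scenario can be made concrete in a few lines of Python. The file contents and the "trusted hash" here are hypothetical stand-ins for the .gov original and the archived copy:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical: a trusted hash was recorded while the .gov copy was live;
# later we check an archived copy against it.
original = b"benzene,ppm,2019-03-21\nsite_a,12.4\n"
trusted_hash = sha256_hex(original)  # recorded when the file was still up

archived = b"benzene,ppm,2019-03-21\nsite_a,12.4\n"   # byte-identical copy
tampered = b"benzene,ppm,2019-03-21\nsite_a,1.24\n"   # one digit moved

print(sha256_hex(archived) == trusted_hash)  # True: same file
print(sha256_hex(tampered) == trusted_hash)  # False: any change shows up
```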
So here we have all our hashes related to our different chunks. What we're going to do is paste them together into a doubly long string; but this is just more information that is hashable. So that's what we do. So then this hash encodes for these two hashes put together, which encode for these two hashes, which encode your actual data, and we do it again until there's just one. And what's cool about that?
One of the things that's extremely cool about this is that it's really good for this verification problem again. So say I'm not just looking for one file; say I'm looking for an entire data set. Any change anywhere, in any file, will result in a different top hash, because that hash that I've got up here is, of course, only encoded from the hashes that came before it. So if we modified this one, it would change what goes into here, so that would change what goes over here, which changes what goes into the final hash.
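The paste-together-and-hash-again process being described is a Merkle tree, and a simplified version is easy to write down. Real IPFS builds a more general Merkle DAG, so treat this as a sketch of the principle, with made-up chunk data:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    """Hash each chunk, then repeatedly hash concatenated pairs
    until a single top-level hash remains."""
    level = [h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2 == 1:          # odd count: carry the last hash up
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

chunks = [b"chunk0", b"chunk1", b"chunk2", b"chunk3"]
root = merkle_root(chunks)

# A change in ANY chunk propagates up and changes the final hash.
modified = [b"chunk0", b"chunk1", b"chunk2", b"chunk3!"]
print(merkle_root(modified) == root)  # False
```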
So I'm going to click through accessing files, as I believe you are generally familiar with this, but I love this quote: what you need to do is have the web server look up a persistent URI in an instant and return the file, wherever your current crazy file system has it stored away at the moment. And that's exactly what decentralized web protocols typically will do, and on IPFS that's going to be the top-level hash that we just figured out.
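That lookup can be imitated with a toy in-memory content-addressed store, assuming a content identifier that is just the SHA-256 of the bytes (real IPFS CIDs carry extra multihash and encoding metadata). The key says what the data is, not where it lives, so the response is self-verifying:

```python
import hashlib

def content_id(data: bytes) -> str:
    """Toy content identifier: the SHA-256 of the data."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical in-memory stand-in for "wherever your crazy file
# system has it": a mapping from content id to bytes.
store: dict[str, bytes] = {}

def put(data: bytes) -> str:
    cid = content_id(data)
    store[cid] = data
    return cid

def get(cid: str) -> bytes:
    data = store[cid]
    # Unlike a URL fetch, the caller can check the answer against the id.
    assert content_id(data) == cid
    return data

cid = put(b"climate dataset, archived 2019")
print(get(cid) == b"climate dataset, archived 2019")  # True
```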
So, coming back to our work: this is us. This is a case that we did this past March. We had someone come to us at the Environmental Data & Governance Initiative and say: hey, we are worried about a chemical fire in Deer Park. This is something that we're going to start a legal case around, but we might not start that legal case until ten years from now. Because what happened is that there was a huge fire.
There were some chemical tanks, and benzene was being released. The response on the ground was terrible because, basically, they got the wrong order of magnitude on what parts per million was dangerous, which meant that they sent in firefighters with no breathing apparatus, they sent workers back to factories next door, they sent children back to school next door. And benzene is a really active carcinogen, and it's very likely that essentially everyone in the area is going to get cancer.
That's horrible. What's worse is that the folks responsible for the fire, and for fighting it, potentially are also the folks who have control over the data that would prove culpability. And one of the things that happened during this fire, which is the reason why this was brought to us, was that we saw that one of the monitors relevant to this was actually taken down during the fire. So there's potential for foul play.
We've been talking about data risk matrices with archivists of different types and with data managers, and one of the quotes from that is great: data is considered to be at risk unless there are dedicated plans for it to not be at risk. And so we developed a risk matrix for assessing different ways that data is stored, to see to what degree it is still in danger. Basically, all data is in danger in some way, but this is the data that we ended up storing in traditional archives, and you can see...