Description
Michael Fischer (DBDAO.xyz) - protocol for managing relational metadata on chain via NFTs for the Decentralized Science community.
Philippe and Thomas (Polyphene.io) - Directed Acyclic Graphs (DAGs) for the Decentralized Compute ecosystem and Project Holium (https://docs.holium.org/)
A: All right, hello, welcome everyone to the Compute over Data working group. Thank you so much for joining. This is our 12th session together, a baker's dozen, and we're fortunate to be joined by Michael Fischer from DBDAO and also Philippe and the team from Polyphene. Michael is going to tell us a lot about the work that he's doing with his database project. It's really interesting and innovative in the way that it integrates NFTs and blockchain for scientific use cases, and a number of other use cases, all built on IPFS and related decentralized technologies. And then from the Polyphene team, we're going to learn more about the work that they are doing around directed acyclic graphs, which has been a big help for the Bacalhau team. All the research they're doing, we're going to be making publicly available.
B: Thanks so much, Wes. Just to say how we met: we were both at a DeSci event in London and we had a really good chat around data, compute over data, and everything Protocol Labs is working on, so thanks for having me here. So I'm working on DBDAO, which is like a database, or a Firebase of sorts, for the Web3 audience. You know, there are a lot of different Web2 and Web3 equivalents: if you need to do a domain name, you use DNS, but if you want the Web3 equivalent, you have ENS. When I started this project, I thought we really needed a database specifically for Web3 applications. That means around data ownership, that means around interoperability, that means around the tokenomics for allowing people to share data. In a lot of use cases we start off at the very technical side, but here I wanted to kick it off with an example of how it's used, and then talk about how it's built on the back end. So, just before that, here's who I am: I did my undergrad at Stanford, and then I recently finished.
B: I also got to do a little work in the law: I was the head TA for a California Supreme Court Justice on a class on regulating AI, and from that I wrote a book on regulating AI, over on the right here. Then I also recently wrote another book on formation, which is the legal side of starting a crypto company.
B: DBDAO is being used in a lot of DeSci applications; DeSci stands for decentralized science. HairDAO is using this, for example. The whole idea of DeSci is to, you know, accelerate science using open protocols and the blockchain: allow easier sharing of data, allow people to fund projects more democratically, allow verification of data. In the use case that I'm about to describe, we basically built a ChatGPT bot to augment the scientific research process, and the idea here is that we want to see what the intersection between AI and data looks like.
B: So, the typical research method: you ask a question, you develop a hypothesis, you come up with an experiment or some methods, that experiment generates some data, which you then analyze and write a conclusion on. And we wanted to see: okay, what part of that could be augmented with ChatGPT or AI?
B: So we first asked ChatGPT: what are the main areas of science? It says physics, chemistry, biology, earth science, all these different things. Then we say: for each of these different fields, come up with an experiment that uses baking soda and vinegar.
B: Then you say: for each of these, figure out a way to test that hypothesis in that field, and it says, you know, for physics, rate how long the volcano erupts; for biology, see how the yeast and the baking soda change things. So we go and say: okay, we like number two, which is around chemistry, and we say: come up with the methods for chemistry.
B: It says: to test the hypothesis that the acidity level of vinegar affects the chemical reaction between baking soda and vinegar in a volcano, you can follow the following methodology. It writes out a whole methodological experiment for baking soda and vinegar. Then we say: okay, come up with an experimental protocol for this, and it comes up with 10 different things that you need to test for. And then we say: write the code.
B: This is how we start to get into the database side of things, but this is just an example of what DeSci is, how you can conduct experiments that use large amounts of data, and why you need an open-source database.
B: In order to do this, we want to be able to replicate this experiment that the AI has created across hundreds of nodes so that people can validate the data. With DBDAO, the project I'm speaking about, if you go to DBDAO's website you'll see we give an example schema for someone's name and age as an example database. So we tell ChatGPT:
B: Here's an example database schema. Now take this protocol that you've written, around the types of data ChatGPT wants us to collect, and convert it into a schema for a database that we can send out to hundreds of people to collect lots of data. Once we do that, it generates a JSON object, which creates this user interface for a database, which is on DBDAO.
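To make that step concrete, here is a rough sketch of the kind of JSON schema object such a prompt might produce; the field names and layout are hypothetical, since the talk only shows the name-and-age starter example.

```python
import json

# Hypothetical schema for the baking-soda-volcano experiment database.
# The exact JSON layout DBDAO expects is not shown in the talk; this only
# illustrates the "text protocol -> structured schema" step.
volcano_schema = {
    "name": "baking-soda-volcano-trials",
    "description": "Replications of the ChatGPT-designed vinegar/baking soda experiment",
    "fields": [
        {"name": "vinegar_acidity_pct", "type": "number", "required": True},
        {"name": "baking_soda_ratio", "type": "number", "required": True},
        {"name": "eruption_strength", "type": "string", "required": True},
        {"name": "eruption_duration_s", "type": "number", "required": True},
        {"name": "future_research_notes", "type": "string", "required": False},
    ],
}

print(json.dumps(volcano_schema, indent=2))
```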
B: So we can add a row. When we add a row, we first have to do the scientific experiment, so we actually did the experiment here of creating a small volcano, and it works, and we measured the various characteristics.
B: We fill out the form that it created with the acidity level, the ratio of baking soda, how strong the eruption was, how long it lasted, suggestions for future research, stuff like that, and that creates an NFT row in this database. What we actually did back here, when we filled out this survey, is:
B: We submitted a data point to DBDAO, which is just a protocol, and DBDAO creates an NFT, so the person that created this data point is actually the owner of that part of the experiment.
B: Just for the sake of simplicity, you can see this is the data point, which is also an NFT. Because it's an NFT, you can purchase this data point on OpenSea, and here is that data point on IPFS, so you have your CID up here and all the different scientific data here. It's also on Arbitrum, so you can see the transaction hash.
B: We only used this one data point, exported it as a CSV file, and asked ChatGPT to write up a research report based on that piece of data, and it writes up this research report. Now, of course, in the future, instead of just one person doing it...
B: For example, HairDAO is using this to collect data on different medications and run different trials with people. The idea is that this is an opt-in trial that people on the internet just wanted to contribute to. It's permissionless: people contribute data permissionlessly and are also incentivized by the data they contribute. And this opens up a whole new set of problems, which we will now talk about.
B: Okay, going back to what we started with: for a lot of things there's a Web2 and a Web3 equivalent. If you need to store a file, you have S3, but you also have IPFS on Web3. If you're hosting, you use Vercel, but you can also use Fleek. For a name service, you have DNS and you have ENS. And for servers, you could write a small serverless function on AWS, or you could write a smart contract. The database is the core part of many types of applications; it's where the data is stored and where it's processed. We have a lot of things like Firebase, Postgres and MongoDB, but there hasn't been a strong Web3 equivalent, and certainly not one that really tries to embrace crypto-economic primitives to allow for data collection.
B: So if we were building a Web3 database, what would be some of the things we would definitely want? The idea that you put your data into a database and then that's the last you hear of it is sort of passé at this point.
B: When people contribute data to a database, they should receive a reward for that data if it's useful. Right now we think of data as costing money to store: I go to S3, I have a photo, I put the photo on S3, and it costs me a penny a year to store it. But what we really should think about is negatively priced data: I put the data on S3...
B: ...it monetizes in some way, through ads or through subscriptions, and then I get a percentage of those earnings back. You could choose to store your data and keep it private, but you could also choose to make it public and then earn money from it.
B: So the idea is: you put your data in a database and, if it's useful to people, you're going to earn money back. Next, data permanence: data should live forever in a lot of applications, especially in the scientific field. It would be really nice if we had the original data for a scientific paper from 100 or 50 years ago, and if we collect data now, it'd be nice to still have it in 200 or 300 years, because then you could try to replicate the experiments.
B: The problem, of course, is that a lot of databases live in an individual lab: the lab transitions between people, the server crashes, and then the data is lost forever, and you can't reproduce a lot of the scientific results.
B: Data composability: right now, because a lot of databases only use a small namespace, you can't easily link between two data sets. Tim Berners-Lee's hyperlink was cool because you could link between web pages, and right now you can't really easily link between databases. But given Web3 and the flat naming structure that IPFS provides, you can now start to link between pieces of data.
B: You would also want to be able to link between records in a database. Data ownership: when you have a row in a database, that row should be yours, and you should have the ability to buy or sell that row to someone else at a future date, or to burn it if you want to destroy it. The important part is that you should have control over it.
B: You don't want to just lob your data over the fence to Facebook or something and then lose your connection to it; you should maintain ownership and the ability to do what you want with it.
B: The cool thing about tokenizing data, of course, is that your data token is now backed by IP, and you can use DeFi-like protocols for lending and staking. Instead of having a token, like an NFT, that is backed by a picture, your data token is backed by the JSON object within your NFT, and you can now borrow or lend based on the IP that is valued in this token.
B: Data privacy is another thing we'd want in a database: you should be able to encrypt your row. And then we need identity...
B: ...where you could tie the piece of data that you have to your wallet and to the reputation that's built on top of your wallet using other services. And then interoperability: this database should be able to work together with multisigs or lending protocols in a way that a Web2 database just can't easily.
B: So, in one slide, this is what we do: DBDAO tries to incentivize data curation for structured Web3 data. It incentivizes people to bring together data of a similar ilk and a similar schema, so that it can later be stored and queried. The way we do this is we structure the database as a DAO.
B: So how do we incentivize things? We pay out the people that curate the data set as well as the people that contribute to the data set. There are two main parties that get paid: the person who owns the DAO...
B: ...who quote-unquote owns the database, or started it, and the people that contribute to it. And where do they get paid from? From the people that are viewing the data set. The way it works is, if you want people to view the database, you can put ads next to it, or a subscription model like Spotify or Netflix does, or ads like Instagram or Facebook does. You're basically selling the data, and that sale goes to the people that started the database and the people that contributed to it, so it all tries to align the incentives between the different parties.
B: And this is a drop-in replacement: all the data is SQL-queryable, and we have SQL interfaces and GraphQL interfaces.
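As a rough illustration of what that queryability could look like for a data buyer (the talk confirms SQL and GraphQL interfaces exist but not their exact API, so the table, columns, and use of SQLite here are stand-ins):

```python
# Hypothetical read path over a curated data set; DBDAO's actual query
# layer is not specified in the talk, so SQLite stands in for the engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volcano_trials (vinegar_acidity_pct REAL, eruption_duration_s REAL)")
conn.execute("INSERT INTO volcano_trials VALUES (5.0, 42.0)")

# The kind of aggregate query a data buyer might run.
rows = conn.execute(
    "SELECT AVG(eruption_duration_s) FROM volcano_trials WHERE vinegar_acidity_pct >= 5.0"
).fetchall()
print(rows)
```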
Then just one quick thing on the business side: where is the value created here? With a lot of crypto protocols you question why they're useful and why they create value. What we claim here is that one individual piece of data alone isn't worth too much.
B: If you have one health record, it's not super useful, but if you compile a thousand or ten thousand health records, you have something that's worth more than the sum of its parts. That surplus is basically the value that's created; we take that value and redistribute it back down to the people that contributed to the data set. So that's where the value is created and how it's shared between the people in the DAO.
B: We based this whole project on the ERC-1155 token. Say here's your X-ray data: you put it into the database, the data is bought by Pfizer or some large drug company, that payment goes into the database, and the database then pays out the individual person who contributed. That row in the database now has a predictable cash flow.
B: So someone could then take that row in the database and sell it for 10x the yearly income it generates. Now, this introduces an interesting question, because I am now very much incentivized to put as many rows into the database as possible, even rows that might have fake data or might be spam, because I'm earning a percentage of the income that comes from this database.
B: If the database is generating a hundred dollars and there are 10 people in it, then with one row I get ten dollars. But if I have a thousand rows in the database, I will get about $99, because my rows will outweigh everyone else's. We don't want this; you want to protect against people Sybil-attacking your database. The incentive for getting data into the database, of course, is that you get a percentage of the profits it generates.
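A worked version of that arithmetic, assuming payouts are weighted purely by row count, which is what the example implies:

```python
# Dilution example from the talk: payouts proportional to rows owned,
# so flooding the table captures nearly all of the revenue.
def contributor_share(my_rows: int, other_rows: int, revenue: float) -> float:
    """Revenue share for a contributor when payouts are weighted by row count."""
    return revenue * my_rows / (my_rows + other_rows)

print(contributor_share(1, 9, 100.0))     # honest: 1 of 10 rows  -> $10.00
print(contributor_share(1000, 9, 100.0))  # spammer: 1000 of 1009 -> ~$99.11
```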
B: Now, we also have to prevent people from putting in bad content. The way we do that is we make people stake a small amount of money on their row. When you put your row into the database, you have to put up between a penny and five dollars; this is set by the administrator of the database.
B: Let's just say it's a dollar. I submit my data to the database along with a dollar, and the row is either accepted into the database or not by the DAO that's controlling it. The DAO in this case is a multisig of people; we'll talk more about who the curator of the database is, but for now the curator is just a black box that takes in a row and either accepts or rejects it.
B: If the DAO accepts the row, I get my deposit back, and if the DAO rejects the row, that deposit is slashed. What's good about this mechanism is that if I'm putting good data into the database, I know my data is good, so I'll put in the one dollar, get it back later, and it won't cost me a lot of money.
B: However, if I'm a spammer and I put in 10,000 fake rows, that's going to cost me ten thousand dollars, and all that money just goes into funding the database. So I'm not going to do that, because I'm going to lose money; I won't even start. The threat of this works well to disincentivize people from spamming the database.
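A minimal sketch of that deposit-and-slash flow; the real mechanism lives in DBDAO's contracts, and the class, amounts, and curator callback here are illustrative assumptions:

```python
# Sketch of the stake-to-submit mechanism described above.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Database:
    stake_required: float          # set by the admin, e.g. $0.01 to $5.00
    treasury: float = 0.0          # slashed stakes fund the database
    rows: list = field(default_factory=list)

    def submit_row(self, row: dict, stake: float, curator: Callable[[dict], bool]) -> float:
        """Submit a row with a stake; returns the amount refunded to the submitter."""
        if stake < self.stake_required:
            raise ValueError("stake below the admin-set minimum")
        if curator(row):             # DAO multisig / juror / AI black box
            self.rows.append(row)
            return stake             # accepted: deposit comes back
        self.treasury += stake       # rejected: deposit is slashed
        return 0.0

db = Database(stake_required=1.0)
refund = db.submit_row({"location": "Berlin"}, stake=1.0, curator=lambda r: "location" in r)
print(refund, db.treasury)  # 1.0 0.0 for good data; spam would only grow the treasury
```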
B: Now I want to talk a little bit about how the DAO works. The DAO is actually a multisig, and so it can be one person, it can be a group of people who are part of a DAO project, or it can be a jury of people. You could have a million people, or a thousand people, in your DAO, and just five of them...
B: ...a subset of five of them are selected each time to determine if the row should be accepted or rejected. Or you could pay people to do it: you could pay some random person on Mechanical Turk a penny to decide if this is good or bad data. Or it could be an AI, which I think is one of the most interesting use cases.
B: You have ChatGPT be your custom spam filter: you train it on 10 examples and then it either accepts or rejects the data, and then you have another person who verifies all the flagged pieces of data.
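A sketch of what that AI curator could look like with the OpenAI chat API; the prompt, few-shot examples, and model name are assumptions, since the talk doesn't show DBDAO's actual filter setup:

```python
# "ChatGPT as custom spam filter" curator idea, sketched with the OpenAI client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT = [
    {"role": "system", "content": "You curate rows for a microclimate database. Answer ACCEPT or REJECT only."},
    {"role": "user", "content": '{"location": "Berlin", "temp_c": 6.5}'},
    {"role": "assistant", "content": "ACCEPT"},
    {"role": "user", "content": '{"location": "buy cheap pills", "temp_c": 9999}'},
    {"role": "assistant", "content": "REJECT"},
]

def llm_curator(row_json: str) -> bool:
    """Return True if the model accepts the row; flagged rows go to a human reviewer."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=FEW_SHOT + [{"role": "user", "content": row_json}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("ACCEPT")
```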
B: You also want to be able to encrypt data. We use Lit Protocol to encrypt certain columns, and what's good about this is that you basically pay for the decryption key: you pay, we give you the decryption key, and then you can get access to the different encrypted columns in the database. One example of where this might be useful: say you had a list of leads for sales...
B: ...with a name and different attributes: this person lives in this location, this is their name. But you would then encrypt the email column, and in order to be able to contact that person, you would have to pay the decryption fee, and then you could query on that private column. So I'll give you a short demo of how it works. We have someone who's building a microclimate database on top of it.
B: We built a UI that's very similar to Google Forms: you put in the database name, you put in a small description of the database, and then you put in the fields that you are looking to create. Here we put in the location, and it's a required field. Then we go ahead and mint that database. So here's the database, it exists, and we then want to add a row to it. We add a row, we'll just say Berlin as a simple example, and that mints into the database.
B: Sure, so that's what the database looks like, but we can also take a look at the row here. Each row is an NFT as well, so there's the row NFT, there's the attribute for Berlin, we can see the Etherscan record for it, and we can see the IPFS record as well. This whole database lives on-chain, which is cool. So how does the business model work? The rewards...
B: The network takes 10% of the revenue when revenue comes in from ads or through subscriptions; we take 10%. The database admin, the person who started the database, takes 30%. This is a variable reward that's set by them, but it's a fixed payout for them, based on the work that they do in curating the data.
B: The curators of a data set need to be paid, both to curate and to advertise the data set and get people excited about contributing data, so they need to be rewarded, but this is a fixed fee. And then the rest goes out to the scouts, the people that contributed the data.
B: So if the database generates a hundred dollars, thirty dollars goes to the database admin, and then, say there are 60 people who contributed data to the database, each of the 60 people also gets a dollar.
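That split is easy to check, assuming the 10% network fee described earlier comes off the same pot:

```python
# The split described in the talk: 10% to the network, a fixed admin share
# (30% in this example), and the remainder divided among contributors.
def split_revenue(revenue: float, admin_share: float, n_contributors: int):
    network_cut = 0.10 * revenue
    admin_cut = admin_share * revenue
    per_contributor = (revenue - network_cut - admin_cut) / n_contributors
    return network_cut, admin_cut, per_contributor

print(split_revenue(100.0, 0.30, 60))  # (10.0, 30.0, 1.0), matching the numbers above
```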
B: We have 50, 60, 70, 80 people show up each time, and we also started a decentralized science fair, where we're trying to do a science fair but on the blockchain. The idea here is that people should innovate around the scientific process: in normal science fairs you innovate around the scientific question, but we're seeing in a lot of science that there are reproducibility issues and issues in the way that science is done.
B: So, in addition to the research question, the decentralized science fair is half about innovating on scientific questions and half about innovating on the scientific process. If people come up with interesting new ways of conducting science using the blockchain or technology or AI, that's part of how it goes.
B: If you're interested, here's the website, and you can reach out to me too if you're interested in sponsoring or getting involved as a participant or possibly a judge. If you have any other questions, here is my Telegram and more info on DeSci. DBDAO.xyz is the name of the project. Thank you, guys.
A: Thank you so much, Michael. Love the vision. Wow, that's a lot, and I think it's really needed, not only by the DeSci community but by the DAO community and others. I'll give you one question just to get started, and then I'll give some space to other folks on the call. One of the things that's come up a lot in the decentralized science community, and the compute community broadly, is this concept of metadata, which you guys handle really well. Folks talk about it in the scientific community for wet labs: what was the humidity when we were doing an experiment, what was the temperature, for reproducibility and those sorts of things. Do you think there's a tie-in to the work decentralized scientists might be doing in the future? If they get funding for their research through an IP-NFT, and they're doing work that leads to drug innovations and breakthroughs and things like that, does the data stored at DBDAO become an important proprietary part of the work that they're doing, for reproducibility or for other marketing purposes? Is there a tie-in at all?
B: Yeah, totally. All the data is stored on DBDAO, which is sort of a protocol: a protocol for querying and storing data. So what you can do is create a Merkle tree root at various snapshots in time, saying: okay, this is the data that was generated up until this point, which was used to write this paper, and then you just put that root into your PDF.
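A minimal sketch of that snapshot idea, hashing row identifiers pairwise with SHA-256; any standard Merkle construction would do, and this one is only illustrative:

```python
# Hash every row (or row CID) at snapshot time into a Merkle tree and
# cite the root in the paper's PDF.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

rows_at_snapshot = [b"cid-of-row-1", b"cid-of-row-2", b"cid-of-row-3"]
print(merkle_root(rows_at_snapshot).hex())  # the root you would cite in the paper
```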
B: Regarding metadata, I think in the future every scientific instrument, say a humidity monitor, will either be its own web address with various calibration stats next to it, or a person plus a device, and it will just be generating data. And every person that generated data into the database that eventually creates useful IP will be given a percentage of the IP.
D: I think you're muted, Wes. Thank you, Michael, for a great presentation. It's a unique design, and I'm just wondering: what would be the primary goal of DBDAO? You want to make data acquisition easy and you want to have this database interface, but in the process it feels like you've also created a great data curation system, with this mechanism rewarding the identification of good data compared to, you know, spammed rows. And that curation is super important with this kind of protocol.
B: Yeah, I think the main thing is high-quality data; I don't think there's any substitute for quality data around a specific topic. So definitely the main focus is working together with projects that are collecting diverse data and need high-quality data in order to conduct their research. The SQL queryability is pretty easy to do, and the rewarding is an interesting question, because I think there are three things we do:
B: one is curation, two is rewarding, and three is data queryability. Data querying is pretty easy; curation is hard, and working together with people is hard. But the rewarding one is interesting, because I think a lot of the time scientific projects aren't looking to pay people directly. We've talked with a couple of projects and they're not looking to immediately monetize data through USDC or to divide...
B: ...you know, IP between people, just because that's not the vibe of the project. The people who are collecting mushroom samples from the woods and identifying them aren't doing it to make a ton of money; they're doing it because they enjoy identifying plants and mushrooms or birds in the forest. So I think rewarding people through things like impact certificates is interesting, and rewarding people is interesting, but it's not the priority that we've seen people adopting it for.
B: So, that's a good question; the main thing is mostly around having people that don't know each other all opt in to contributing to high-quality data sets. It's creating high-quality data sets.
D: Thank you so much. Yeah, and by having these rewards done programmatically, you foster integration with other protocols. Incentivizing people to store good data won't be done only through subscriptions or direct incentives, but if you can do it programmatically, then I think you foster integration with other protocols and it becomes quite interesting. Thank you, Mike.
A: All right, I think we could probably fill up the rest of the time with more questions, and I know I definitely have more for you as well, but I want to make sure we give the Polyphene folks a bit of time too. Michael, thank you so much. We'll channel everyone else into the compute over data working group channel in the Filecoin Slack for additional follow-ups, and I'll sync with you afterwards to make sure they have all the contact information for you.
B: I will stop sharing here... good to go, there we go. Thank you, and thanks again, Wes. And yeah, everyone should come on by DeSci NYC; we're having another one on February 16th, which everyone is invited to.
C: So I will take over for the next 10-15 minutes or so to talk about the work we've done on integrating the Bacalhau network into Airflow, and also DAGs, to try to get some data lineage on tasks executed over the network. And then Philippe will take over to talk a bit about Holium, which is our approach to DAGs and how we think it could actually be used to create DAGs over the Bacalhau network.
C: First off, I want to say thanks to Enrico, who is here today with us. He has been a huge help in our work; we've worked quite closely together, and we could do this thanks to him. The main idea behind focusing on DAGs over the Bacalhau network was to be able to track different tasks being executed one after another, and to make sure that we could read data lineage over the inputs and outputs of those tasks and of the different methods, executions and computations that happen over the network.
C: To do so, we first looked at the Bacalhau Airflow provider. This is the work that was done by Enrico, where he created what's called an operator in Airflow. In case not everyone is familiar with it: an operator in Airflow is what you use to spin up a task, a task being one part of your DAG, with which you can point to some input and run some work; it can be a computation...
C: ...it can be the run of a CLI or anything, and it then creates some outputs computed from the inputs. With this operator we could actually run some Docker tasks over the Bacalhau network, even run some Wasm tasks, and get results back from them. But what was lacking was the ability to actually dive into the different operations, to make sure that with the proper inputs we get the proper output, and to try to go back to the root...
C: ...of the execution of those tasks, I would say. To create this capability over the network, we used something called OpenLineage, which you might already be familiar with. OpenLineage is a layer on top of Airflow that you can plug in, together with a backend that you connect to it; to simplify things, let's say Marquez, for example.
C: You can actually collect metadata from the different tasks that you run and make sure that you can store it and read it all the way through the computation of the tasks that you've set in your DAG. Now, what's needed for the whole backend to work, and for you to be able to collect the metadata from the tasks and make sure that you can read every input, method and output used in your execution? Well, for that, as I said, we need OpenLineage.
C: How do we implement it over Airflow? You implement it thanks to their SDK, and it's actually only two methods, so it's quite simple to integrate. What we've done here is create a pull request against the current operator that was created by Enrico, and the main idea is that to integrate this metadata layer, you need to implement two functions.
C: Let me try to present that in a better way. Two functions: the first one is get_openlineage_facets_on_start, which lets you, before any task you run (in our case, an execution over the Bacalhau network), store metadata inside our Marquez backend. The second one is get_openlineage_facets_on_complete, which lets you store metadata once you've executed the task over the Bacalhau network.
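In code, a simplified Bacalhau-style operator exposing those two hooks might look like the following; the operator class and its fields are made up for illustration, while OperatorLineage and Dataset come from the OpenLineage/Airflow integration (exact import paths depend on the provider version):

```python
from airflow.models.baseoperator import BaseOperator
from airflow.providers.openlineage.extractors import OperatorLineage
from openlineage.client.run import Dataset

class BacalhauSubmitJobOperator(BaseOperator):
    def __init__(self, input_cid: str, **kwargs):
        super().__init__(**kwargs)
        self.input_cid = input_cid
        self.output_cid = None

    def execute(self, context):
        # ... submit the Docker/Wasm job to the Bacalhau network here ...
        self.output_cid = "bafy..."  # placeholder for the CID the network returns

    def get_openlineage_facets_on_start(self) -> OperatorLineage:
        # Called before execution: record the input CID in the Marquez backend.
        return OperatorLineage(
            inputs=[Dataset(namespace="ipfs", name=self.input_cid)],
        )

    def get_openlineage_facets_on_complete(self, task_instance) -> OperatorLineage:
        # Called after a successful run: record input and output CIDs.
        return OperatorLineage(
            inputs=[Dataset(namespace="ipfs", name=self.input_cid)],
            outputs=[Dataset(namespace="ipfs", name=self.output_cid)],
        )
```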
C: Now, it has to be noted that those two methods only cover the happy path: we do not currently implement, for example, handling of errors or failed executions or anything like that. The reason is that the current operator we're working with was created as a POC last year for the gathering in Lisbon, and Enrico is currently implementing a new one, so we are waiting for that one to be fully complete to make sure that we properly integrate...
C: ...other cases as well, like failed computations and so on. Now, what's interesting with OpenLineage and Marquez is that they come with an interface that's very useful. I'll do the demo right after; I'll just present it quite roughly here for now. The main idea is that we have an interactive playground where we can actually find our DAG.
C: As you can see here, for example, I have quite a simple one, where I can see that it's a simple counter: I generate a number and then add them one after another. With that I can see what's happening in my DAG, what I'm storing, what the tasks are, etc. We can differentiate here the different jobs that I have, which is the computation, with the little wheel here, and the places where we store them.
C: It goes with these diagrams, and we can also see a history of events, where we can see every execution that happened. Now, for a quick demo of what I have here today: I can give you the link, it's public, so if you want to take a look, if you're interested, please feel free; I'll share it right after I finish...
C: ...my part of the presentation. Now, I have here a DAG which is running, which is a simple integration of Bacalhau, using the operator that was created. As I have a version mismatch between my current Bacalhau client and the Bacalhau client on the server, it's not fully functional; it's not fully working till the end. But still, if I go to my Marquez backend here, I'll be able to see in the events that I have different jobs that are running and whose metadata is being stored over the Marquez backend, and for those we can find those tasks here.
C: Sorry, we have only three of them that are being recorded. We have the number generator here, where we have a lot of different information, quite a lot actually, from the start date to the task IDs being run. We can also see what type of task it is: in our case here it's a BashOperator, because we're using Bash to generate a random number on our local computer. Now, if I go over to, for example, the Bacalhau first run here, we see that we are no longer...
C: ...sorry, we are no longer in a simple BashOperator; we are in the Bacalhau operator, and we are using the Docker run operator here with the task ID "first run". Again, I'm...
C: Sorry, not all the metadata is here right now, a demo problem, but it should be appearing. The idea is that here, in this completed task, we'll be able to find inside the inputs the different inputs that interest us, so that would be the client ID, the input CID and others, once it starts; and when it's complete, in the output we'll be able to find the CID of our outputs, and so be redirected to IPFS to look at them. Now, in terms of how we want to integrate Airflow, and so metadata collection with OpenLineage and Marquez, over the Bacalhau network: it's still quite under discussion. The problem is that, as of now, this needs to run somewhere, and so a client, such as myself, has to run both of those nodes. So, to make sure that we can ensure proper access to those features for any client using the public Bacalhau network...
C: ...it would actually prove useful to run distributed access to such services, and so to have maybe a layer, a new network, or new services on top of the Bacalhau network that could provide collection of metadata for any job that needs to be run. That would be one way to give our users access to the collection of metadata.
C: Another way to do so would be to consider that all users could actually be people that are already familiar with Airflow, already familiar with OpenLineage and Marquez, and so they could already have their own backend running; they could spin up multiple instances of those backends and collect the metadata for themselves.
C: This is not ideal, though, because ideally, in the end, what we would like to do is publish the different DAG information publicly and be able to prove that, from an execution, from an input and a method, we produce an output, and to use those triplets of information to share it with anyone else that wants to run the tasks. I will now hand the presentation over to Philippe, who will talk a bit more about this vision.
D: Thank you very much, Thomas. Yes, I'll keep it short. I'll try to share my screen... maybe, Thomas, you can... yeah, give me the hand. So we're talking about triplets of information: input data, method data, and the output of the execution of a method. Actually, I'll take a step back: before working on this Airflow operator integration, we designed, starting in 2021 I think, a protocol...
D: ...we called the Holium protocol, and then we met the Bacalhau team, Enrico, and also Luke and Kai, I think it was a year ago, and we engaged in discussions based on this protocol. So what is Holium at the moment? It's a couple of things: a design, some specifications, and an implementation in Rust that can be used through a CLI. I'll just give you a sense of what Holium is within these five minutes, but I would be super glad...
D: ...to first link you to the more complete documentation, and maybe have an informal session with a tutorial, or go deeper if you're interested in this protocol. So here is the first thing that could probably be interesting: holium.org, and the documentation is at docs.holium.org, I think. We started this design a couple of years ago, and very briefly, where does Holium sit? When we designed it, we were first super interested in DBT as a tool...
D: ...the pipelines based on the extract-load-transform flow, which was gaining quite some attention, but that was confined to the world of SQL-constrained transformations. At the same time, because of our background, Thomas and myself and the rest of the team, we were obviously quite interested in this wave of commoditization of data, this wave of what blockchains and Web3 enable, in particular when it comes to data storage. You know, DBT was pretty useful on private warehouses, and we saw the convergence of what could be considered one of the most beautiful public data lakes, the biggest data lake, the whole IPFS stack, with an evolution of the SQL-constrained ELT pipelines that we wanted to open up to more generic ELT.
D: So that's basically where Holium sits, and I'll give you a sense of how we built it, but I'll be very brief. We like to compare this protocol to DBT on three layers. First, I'll address the execution environment. Here I take the example of executing pipelines with similar tasks and steps, a simple Euclidean division, and in DBT...
D: ...transformations are done in-database using SQL. But in our case, for many reasons, we wanted transformations to be written in many languages; we wanted to foster interoperability; we wanted to make these tasks individual elements of modular pipelines. So we simply used, in the design, a containerization solution, which we chose at the time to be a Wasm execution environment. Once you make this choice...
D: ...you also have to standardize the interface between two steps of a pipeline, a role that is played by SQL models in DBT. For the sake of brevity, I'll simplify by saying that all transformations in this design are transformations from one JSON object to another JSON object, which can be handled through methods written in many languages. So that's the first layer in the comparison with DBT.
We
also
had
to
design
another
format
for
data
that
goes
from
one
task
to
another,
because
we
want
to
change
these
tasks
and
we
wanted,
to
you,
know,
detach
any
any
keys
from
original
data.
So
I'll
keep
it
brief.
But
what
we?
D: ...what we've done is we've used CBOR, a kind of binary, efficient equivalent to JSON, and we tried to remove any keys from this format, trying to find deterministic ways to transform JSON maps into arrays and vice versa. That's what we ended up with, what we called the Holium CBOR format. Starting from there, before executing any method, we transform a JSON object into this new format.
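As an illustration of the key-stripping idea (not the actual Holium CBOR specification, which defines its own deterministic rules), a toy version in Python with the cbor2 library might look like this:

```python
# Sort map keys deterministically, keep only the values, and CBOR-encode
# the result; the original key order is then recoverable from the schema.
import cbor2

def strip_keys(value):
    """Deterministically turn JSON maps into arrays (depth-first, sorted keys)."""
    if isinstance(value, dict):
        return [strip_keys(value[k]) for k in sorted(value)]
    if isinstance(value, list):
        return [strip_keys(v) for v in value]
    return value

payload = {"dividend": 17, "divisor": 5}
encoded = cbor2.dumps(strip_keys(payload))  # compact, key-free binary form
print(cbor2.loads(encoded))                 # [17, 5]
```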
D: This is the Holium CBOR format, and then executing pipelines is just as simple as connecting some outputs from one execution to the input fields of another execution. In this example, running the Euclidean algorithm to find the greatest common divisor, we remove any contextual information using Holium CBOR and we just connect inputs with outputs.
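A toy version of that wiring for the GCD example, with each step as a pure JSON-to-JSON transformation; the pipeline format here is invented for illustration:

```python
# Each step consumes one JSON-like object and produces another; the pipeline
# simply feeds each step's output into the next step's input.
def euclid_step(data: dict) -> dict:
    a, b = data["a"], data["b"]
    return {"a": b, "b": a % b}

def run_pipeline(data: dict) -> dict:
    while data["b"] != 0:
        data = euclid_step(data)
    return data

print(run_pipeline({"a": 252, "b": 105})["a"])  # 21, the greatest common divisor
```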
D: There are many reasons for that, and what was of prime interest for us was to use CIDs and design the right IPLD schemas, the InterPlanetary Linked Data schemas, for it to be understandable and interoperable. That's essentially what we did with Holium CBOR. You have scalar and recursive data...
D: ...and any recursive data is also based on CIDs. What's interesting is that bytecode, but also metadata used by the protocol itself, like pipeline definitions, is simply identified through singular CIDs. Unfortunately, I won't get deeper today, but I hope it gives you a sense of what the Holium protocol we designed a couple of years ago intended to be, and what it led us to with these discussions with Enrico and the Airflow operators we are designing today.
D: Here are a couple of links where you can find more information on this project, which is now paused but still useful for the design work we do with Enrico and the Bacalhau team today. There are some areas for improvement, obviously, but if it sounds interesting, please ping us on Slack. If we were to organize another session, we could go over how to use the CLI and how to get into the first implementation we did in Rust; that could prove useful, I think.

D: Thank you for your attention, and thank you, Wes, for letting us speak today.
A: Yes, thank you so much for sharing all the learnings there. I love the approach you guys are taking with ETL on immutable data sets.
D: Thanks, yeah. There are so many things we want to do at the same time, but what is super good about Enrico's work is the practical approach: while they're building it, they're trying to see how people could use it, so slowly, eventually, we're creating something that is useful and that works at the same time.
A: Completely agree; finding the right use case is so key. All right, well, if no one else has any other questions, we can wrap up. Thank you very much, Michael, Thomas, Philippe, for joining today; tremendous content. We're going to post this on YouTube shortly, and then we will continue the conversation. If anyone listening would like to go to the filecoin.io Slack URL, they can join the Filecoin public Slack channel.