From YouTube: Compute Over Data Working Group 1st Community Session
Description
Welcome to the first meeting of the Compute Over Data Working group. Goals for the community include:
- Create a space for collaboration between all teams building decentralized Compute Over Data platforms.
- Increase awareness of various solutions, and share best practices.
- Foster collaboration between projects through composable, reusable protocols and libraries.
Topics for this session:
- David Aronchick outlines the goals for the community.
- Each project shares a brief overview of their vision and their goals for the community.
Get involved at: https://www.cod.cloud/
A: And okay, here we are. Good morning and good day to everyone, and thank you so much for taking your time to join us for the first Compute Over Data community meetup. We're very much honored to have all of you here; it's a strong showing. In terms of logistics, we've got a healthy showing from different projects, and I want to give every team an opportunity to give an introduction today. Then, certainly in future sessions, we're going to give time for each of the project teams to go into much more detail about all the good technology that they're building. So with that, David, I will hand it over to you.
B: Thanks so much. So my hope is that this is the first and only time that I speak to you as, for better or worse, kind of the person driving this group. My name is David Aronchick, for those that I haven't met personally. I co-lead research development here at Protocol Labs, and I'm leading our Protocol Labs work on a project called Bacalhau, which many of you have seen. It's an open source project designed to be, for better or worse, a reference implementation of Compute Over Data, where you are executing compute next to your Filecoin and IPFS nodes. It is very much in proof-of-concept mode, and it really is designed to be a reference implementation.

B: Our goal with the project is to provide both concepts and a baseline that people in more specialized domains, as many of the folks on this call are, can use, reuse, or choose to swap out in an elegant way, and go build your great businesses on top of. As for my background: I'm somewhat new to Filecoin and IPFS.
B: I've only been here about nine months. Before this I worked very, very deeply in communities: I was the first non-founding PM for Kubernetes, which I led for three years on behalf of Google, and I started a project called Kubeflow in machine learning and AI; it's a machine learning platform for Kubernetes. The reason I bring that up is that I've been through the community wringer before, and I cannot stress enough how much it is part of my DNA to not be a "hey everyone, we need to go do this" person, but instead to find a path that we all can see ourselves in and jointly go forward together. So with that, my hope is — you may have seen Juan has talked about this quite a bit — that we are able to help leverage the huge success that IPFS and Filecoin have had to date and really transform it in this new direction.
B: All these folks on this call — and hopefully many more — should be able to leverage that data and take it in new and interesting directions that are specific to the needs that you see, because we, at kind of the center of a lot of this stuff, see only the very generic needs, really at a protocol level. So those are the broad strokes. This is very much a bootstrapping process, so we all need to come to agreement on our goals and on regularity.
B: There's lots of bookkeeping: what time do we meet? How are we going to elect people? How are we going to set the direction of this? Wes's and my goal is, just like with Bacalhau being a reference implementation or a center point for people to collaborate on as far as tech, to also provide the center point for people to communicate their own work. So we will take on the action, should this group be okay with it, of setting up common communication channels, setting up websites, setting up things like that, for you all to publish your own work. And given that I kind of sit in both camps, I would love to talk to all of you individually about your goals and see how I can help support you, and all that good stuff. So that really is it. I don't want to take up any more of your time; I really would love to hear from everyone.
B: If you have any questions, I'm happy to answer them along the way, but other than that I'll leave it open to everyone else to chat or ask me questions.
A: Brilliant, all right! Well, what we can do for starters, then, is to jump into each individual project team, and if we could take maybe five to seven minutes each: I'd love to hear from each team, particularly what is unique about your vision for compute and what problems you're trying to solve; whether you see standards or opportunities that the community should be focusing on to make your work higher value-add; and then any other things the community can help bring to your project, whether it's awareness, common standards build-out, or other things like that. So if we can — Charles from FilSwan, can you hear us okay?
D: Hey, great, so this is Charles from the FilSwan team. Sorry, it's a bit noisy here. So first of all, FilSwan is a cross-chain storage provider, and we are aiming to bridge different payment solutions to storage like Filecoin, or to other computing solutions. Currently we are looking for a good computing blockchain or solution that we can bridge into, so we'll analyze something like ICP or the Akash network.
D: We also saw Bacalhau and others are online as well, so we haven't decided which one we are building on next. Currently we have three data centers running storage in Canada and the US. We wanted to build a decentralized, distributed computing and storage network, but our goal is not solving all the problems by ourselves; it's more about doing bridging, so we can use one token — say, USDC or other tokens — to make a payment for different Web3 services on different chains.
D: That's our goal! So initially we just finished the integration from the Polygon network to Filecoin. This afternoon I'm going to visit another team to see their Polkadot cross-chain solutions, so you can see, if we get their solution merged together with ours, we can have two payment chains with one storage solution. So I'm very happy to be part of this, and I would like to know whether there are solutions provided also for DID. We are starting to use Voltage to provide different levels of access control to IPFS.
E: Great, and I actually ended up creating a short little presentation that I will flip through. So my name is Boris Mann, based in Vancouver; my team is distributed all over the place. I'm the CEO, so the less technical half, but obviously we're all in a very technical field. We're building an entire edge computing stack, and when we say edge, we actually mean client-side, local-first software, where people are building in the front end, whether that's native mobile or on the web. We're trying to make it very easy for developers to build applications without having to be DevOps experts and everything else like that. So our web-native SDK features DID-based accounts that link to browsers and desktops; we also did some work with putting Filecoin keys in the browser securely, and passwordless login.
E: There's lots of things happening in this space right now. Brooke, my co-founder, developed an encrypted file system on top of IPFS, which actually lets us do differential access to private data. It has versioning built in and, of course, public file sharing through IPFS, giving you portable data. All of this reads and writes to IPFS natively in the browser, rather than treating IPFS as just a data substrate or building blocks. I think IPFS and content addressing is a sustaining innovation; it's a huge commons network.
E: We have lots more work to do to scale it and improve it, in the DHT and everything else like that. But having a global addressing space where the data is verifiable is, we think, a huge thing, and we're betting a lot on it. Lots of browser APIs are coming along; the Web Crypto API is something that a lot of people know about — it gives you non-exportable private keys in the browser — and we think the browser is a great target for doing all sorts of things. Do not bet against JavaScript. And then, of course, the DID space has been growing very well, and you've got some other interesting things happening with verifiable credentials. We've been in the DID space for some time, and there's lots of good things happening.
E: We are very much a protocol engineering shop. As part of what we do, we have an applied research group that is led by my co-founder Brooke, and Quinn is the second person in that group. We have designed, shipped, and essentially graduated UCAN outside of Fission.
E: It's got a working group; Protocol Labs has adopted it; the Twitter Bluesky project is doing some work there; Jack's Block and Web5 is using some of those components; and generally we've been very happy with people adopting it. Basically, if you want a decentralized version of OAuth that lets you delegate capabilities securely from one service to another — the same kind of cross-chain, cross-integration kind of thing — we'd love for you to take a look.
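The delegated-capability flow described here can be pictured as a chain of tokens, each signed by its issuer and each only narrowing the capabilities of the link above it. Below is a minimal illustrative sketch of that invariant — it is not the real UCAN format (actual UCANs are JWTs signed with DID key pairs); the HMAC "signatures" and field names here are invented stand-ins:

```python
import hashlib
import hmac
import json

# Toy key registry: issuer name -> secret signing key.
# Real UCANs identify principals by DID and use public-key signatures.
KEYS = {"alice": b"alice-secret", "bob": b"bob-secret"}

def sign(issuer: str, payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(KEYS[issuer], msg, hashlib.sha256).hexdigest()

def make_token(issuer, audience, capabilities, proof=None):
    """Issue a token; `proof` is the parent token this one derives from."""
    payload = {"iss": issuer, "aud": audience,
               "cap": sorted(capabilities), "prf": proof}
    return {**payload, "sig": sign(issuer, payload)}

def verify_chain(token, root_issuer):
    """Check signatures, issuer/audience linkage, and that capabilities
    only ever narrow as they are re-delegated down the chain."""
    while True:
        payload = {k: token[k] for k in ("iss", "aud", "cap", "prf")}
        if token["sig"] != sign(token["iss"], payload):
            return False
        parent = token["prf"]
        if parent is None:
            # The chain must bottom out at the resource owner.
            return token["iss"] == root_issuer
        if parent["aud"] != token["iss"]:
            return False
        if not set(token["cap"]) <= set(parent["cap"]):
            return False  # attempted privilege amplification
        token = parent

# alice (owner) delegates to bob; bob re-delegates a narrower grant to carol.
root = make_token("alice", "bob", ["wnfs/read", "wnfs/write"])
narrowed = make_token("bob", "carol", ["wnfs/read"], proof=root)
```

Here `verify_chain(narrowed, "alice")` succeeds, while a token in which Bob tried to grant Carol more than he was given fails the subset check — which is the essence of the OAuth-like delegation being described.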
E: Private data on IPFS is a missing piece; we're working very hard to try and make WNFS a piece that can be dropped in and used everywhere. And then finally, we're taking our work on the file system and working on a full edge database with CRDT capabilities. That's just in research, and I can share some links about that.
E: So I wanted to do a little visioning of how I think about IPFS. I think every human on the planet should be able to store data online, forever, effectively for free — and I mean that just from a capabilities perspective; obviously big-data enterprise companies will be paying. We should aim for some of the same developer experience as a commercial platform in the protocol itself, and then for identity, data, and compute as a commons network, which is really the topic here. Other than not betting against JavaScript, WebAssembly is going to be our focus for compute over data. We're moving our stack to Rust generally, making it work everywhere. We think there is going to be lots more adoption of Wasm by front-end developers, who typically have not been a target.
E: We have some research planned around content-addressed Wasm functions. So again, picture all of these things having addresses; and as pure functions, you can also content-address the arguments and even the answer, and so there might be some interesting things to do there.
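The content-addressed-function idea sketches nicely in a few lines: hash the function's code together with its arguments, and the digest becomes a universal key under which the answer can be cached and shared. This is only a toy illustration of the concept — a real system would hash the compiled Wasm module and use IPFS CIDs, not Python bytecode and raw SHA-256:

```python
import hashlib
import json

STORE = {}  # digest -> result; stands in for a content-addressed store

def call_address(fn, *args) -> str:
    """Address = hash of the function's bytecode plus its serialized args."""
    blob = fn.__code__.co_code + json.dumps(args, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def cached_call(fn, *args):
    """Run `fn` only if nobody has published a result at this address yet."""
    addr = call_address(fn, *args)
    if addr not in STORE:
        STORE[addr] = fn(*args)
    return addr, STORE[addr]
```

Because the address covers both code and inputs, the same call made anywhere resolves to the same digest — so the answer itself becomes discoverable, reusable data.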
So, if you're interested in doing things around that: we don't think that we're going to lead or invent a bunch of things there, but we think that we've got a stack that fits together really nicely.
E: So if you would like to have capabilities between services that are run p2p, or if you're interested in private data — I won't go over this list, but I'll include the link — these are some of the upcoming projects. It's a very long list, and we're not necessarily going ahead with all of them, but probably the one of interest to this group is WNFS and a Filecoin app to add end-to-end private data; that should give other capabilities. I'll share links to our Discord and other places where you can come hang out with us. Lovely to meet everyone.
F: Okay, hello, everyone. This is William from KenLabs; I think we also have a colleague here on the call, so we are both from KenLabs. Currently we are working on sidechain metadata storage. So you see here the topic, Compute Over Data — that's our mission, and we are also building the infrastructure for it.
F: Currently we are working on a service called Pando — you may or may not be aware of it. Anyway, it's a store for verifiable and structured data for now, and it's initially for the ecosystem, running as a sidechain metadata store.
F: In the future, we are looking to see a Pando network which will link the structured data across the whole decentralized network. So in general, we hope to see a hub network linked with the Pando nodes, so that clients can save structured data to the network and, at the same time, are able to query this verifiable data over very familiar interfaces — say SQL, GraphQL, and so on.
F: When we arrive at this stage, we will see more and more computation on top of Pando. So this is what we'd like to have in the future: Pando as the storage infrastructure, and on top of that a network with orchestrators and worker nodes, so that each transaction or computation can be scheduled to different workers and it can run as a network.
F: Ideally, we want to integrate with or adapt to existing big data processing frameworks so that it can be widely used and deployed by Web2 businesses. For example, Apache Spark is a very popular framework that has been widely used, so we want to build an adapter layer, or driver layer, so that Pando storage — or even the network — can be adapted to the Apache Spark framework and integrated into mainstream data processing and analysis use cases.
F: So this is how we envision the future of the Pando network. The last thing I'd like to share is that eventually we want to see a Web3 data lake. That means we don't have to focus only on structured data — data can even be unstructured — so the Pando network can be such a data store for Web3, whether structured or unstructured.
C: Yeah, I can share some slides. So I'm Evgeny, co-founder of Fluence. What we do is basically try to bring the serverless experience into the Web3 space. At Fluence we do several things; at the larger scale we call it decentralized computing, or a particular application platform, or a cloud marketplace — it's all kind of the same thing. The way it maps onto the Web2 stack is that we have two bigger pieces.
C: First is a WebAssembly runtime that can basically run WebAssembly functions on any network node, and we have certain nice features like WebAssembly interface types and WASI.
C: So you can just compose WebAssembly modules written in different languages together, and it all runs fine. And then we have the Aqua language, which is something that replaces workflows or step functions. It's basically a language where you define the control plane on top of WebAssembly functions sitting on different nodes in the network, and using Aqua you can create different kinds of distributed algorithms and replace certain cloud components. You can define algorithms for load balancing, scaling, routing, orchestration, deployment, and things like consensus engines and certification.
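A caricature of that control-plane idea: the "script" is pure control flow — which peer runs which function, and where the result goes next — while the functions themselves live on the nodes. This is an invented toy, nothing like actual Aqua syntax:

```python
# Each peer exposes named functions; the plan only describes control flow.
peers = {
    "peer-a": {"fetch": lambda _: [3, 1, 2]},   # e.g. read from storage
    "peer-b": {"sort":  lambda xs: sorted(xs)},
    "peer-c": {"head":  lambda xs: xs[0]},
}

# Hop-by-hop plan: (peer, function) pairs, result threaded through.
plan = [("peer-a", "fetch"), ("peer-b", "sort"), ("peer-c", "head")]

def execute(plan, value=None):
    """Walk the plan, handing each step's result to the next peer."""
    for peer, fn in plan:
        value = peers[peer][fn](value)
    return value
```

The point of separating the plan from the functions is that the same workflow description can be rerouted across different peers without touching the compute modules — the role Aqua plays over Wasm functions in Fluence's description.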
C: Yeah, Aqua is also pi-calculus based. The whole idea is to avoid centralized bottlenecks like API gateways and things like that, and to allow building peer-to-peer scenarios between client devices, but also to enable peer-to-peer communication between the servers — the machines that run computation for users on the back end. The Fluence network is heterogeneous: every Fluence peer is running Aqua, running Marine, and is also plugged into different sources of data. So by default we distribute Fluence peers with IPFS access, but it's going to be Filecoin too.
C: It's going to be blockchain access, so you can read data from anywhere, compute on it, and write data to anywhere, and you can basically discover and select whichever nodes on the network provide you with access to whichever data. The end goal is to have as many cloud components as possible available as Aqua libraries, have them deployed on the network, and have all of this available as an incentivized network.
C: So currently the network and the development stack are live and running — we're constantly working on improvements and stability — but the economics is not there yet, so there is no way to pay for now. So you'd better run your own nodes or rely on community nodes; that's the way it works right now, like IPFS, from the incentivization perspective. And I guess that's it.
G: But just to add to that real quick. From our perspective, some of the things we're looking forward to are a common compute model and module descriptions — we're using Wasm and WASI — and things we're really interested in possibly coming out of this, like verifiable compute standards and compute proof standards. I think everybody's working on this in different ways, so this will also be an area we'd be interested in collaborating on with others, same with the reuse of compute modules themselves.
G: Again, I think everybody's working on these mods and modules, so there's got to be a lot of overlap and things like that. The Spark adapter I just saw looked really interesting to me — something you could almost standardize around. And as for the community helping Fluence: obviously, awareness is a big deal. We're big believers in coopetition, so I think this would be extremely helpful — iron sharpens iron — so this is something we would be looking forward to, and it would definitely help avoid duplication and dead ends.
G: I think there's time-to-market pressure in general. I think that we all need to deliver — 2017 is over — and the more we can avoid failures and duplication in what seem to be overlapping research agendas, the better. I think this is something we, at least, would be very appreciative of as part of the community benefits.
A: Very good. Thank you, Bernard and Evgeny, and especially for your opinions about the community and standards — that's going to help all of us, I think. And I think there are certainly a lot of commonalities starting to form, which is super helpful. So we will next turn it over to Al and the Koii network.
H: Hey folks, sorry, I've just been slightly tuned in, listening here. Great to see all of you. We're kind of new to the IPFS and Filecoin ecosystem, though I did write a course on IPFS back in about 2018 — I used to run a company called We Teach Blockchain in Chicago, and mostly that's what we'd been doing. Koii is trying to figure out how to onboard more people as node operators throughout all these systems that we have, so here's kind of the way that works.
H: We have sort of a social economy, similar to Basic Attention Token, where we mint tokens based on how much traffic people get. This allows us to build up sort of a reputation profile for individuals. We've been creating this for about a year and a half now; last year we got about 30 million views on the network, and we've got about 45,000 people signed up to run nodes. Now, what our node looks like: it's basically an arbitrary compute environment, mostly designed to run on personal devices like phones and computers.
H: We started out doing a lot of web scraping and content ingestion; lately we've been working on trying to build out edge-node and IPFS-pinning use cases, that kind of thing. What else is interesting? I guess probably the big thing that's going to be helpful for people here is that we can pretty much open these nodes up for you to use. These people have all signed up to run the node and earn tokens, so we have a fairly large foundation treasury and we can hand out tokens to all of you if you'd like to use our nodes, if you have a use case. Currently the container is a JavaScript container, so you can run any npm module, TypeScript or JavaScript.
You get a REST API, a Redis cache, file store access, and there's an IPFS node in there now, plus a bunch of other cool new features we're working on.
H: We have some neat things as well around DIDs, so you can get attestations on a node to verify where it is in the world. We do kind of a triangulation routine: we have a number of nodes whose locations we know, and then for the rest of them we can piece together where they are from that.
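The triangulation routine described above — anchor nodes with known coordinates plus measured distances to an unknown node — amounts to a small least-squares multilateration. The sketch below is an illustrative reconstruction of that idea (plain gradient descent in 2D), not Koii's actual implementation:

```python
import math

def multilaterate(anchors, dists, steps=5000, lr=0.02):
    """Estimate (x, y) minimizing the squared error between predicted
    and measured distances to each anchor of known position."""
    # Start from the centroid of the anchors.
    x = sum(a[0] for a in anchors) / len(anchors)
    y = sum(a[1] for a in anchors) / len(anchors)
    for _ in range(steps):
        gx = gy = 0.0
        for (ax, ay), d in zip(anchors, dists):
            dx, dy = x - ax, y - ay
            r = math.hypot(dx, dy) or 1e-9   # avoid division by zero
            err = r - d                      # residual for this anchor
            gx += 2 * err * dx / r
            gy += 2 * err * dy / r
        x -= lr * gx
        y -= lr * gy
    return x, y
```

In practice the "distances" would come from noisy latency measurements, so a real deployment needs more anchors than unknowns and some outlier rejection; the least-squares formulation absorbs that noise naturally.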
H: We're working on other attestations too — like whether somebody is green-certified, or whether they're running a hosting node in a particular data center — any of that kind of stuff could be helpful for all of you. The other side of it that we are actively trying to expand on is potentially financing for node operators. We got a pretty huge grant from IBM last year, and we're working on getting one from AWS right now.
H: So a lot of the node operators who are using our desktop nodes will also eventually be able to upgrade and get a hosted node. What that will mean, hopefully, is that they'll have lots of co-located data center hosting that they can then actually run pretty high-level compute on: they can run a REST API for you, they can pin IPFS files and make them accessible to the web — a lot of that stuff, essentially.
H: But if anybody wants to follow up on any of this stuff, I'll drop my Telegram handle into the chat here and we can chat there, or I'm also on the Filecoin Slack now as Al Morris.
H: We're following closely from the IPFS world and from Arweave, and we're very interested in starting to set up some asteroid nodes and all that kind of stuff in the future.
J: Hi, hello, this is Aitor from Nevermined. Sorry, I couldn't connect earlier — some family issues came along, but it is what it is. I'm outside, so I can put my camera on just for a second to say hello, but in order to save some bandwidth I'm going to disconnect the camera. So, I'm CTO at Nevermined, which previously came out of Keyko.
J: What we have always puts an asset at the center, and we facilitate different kinds of services or use cases on top of this digital asset. One of them is traditional data sharing. And when, because of privacy constraints, we cannot allow that sharing, we have a solution for data-in-situ computation — or compute-over-data, depending on how you want to name it. Basically, we started on this specific part of the product very focused on problems that traditional Web2 companies have.
J: Basically, a company has big data lakes, a data warehouse environment; they have data in some places, they don't want to move this data, but they want to allow some level of computation on it. So we started with the Nevermined compute orchestrator, which allows running some traditional analytics use cases.
J: The typical use cases where you require Spark or Flink — we have all of that — plus something that allows orchestrating federated learning, which is also very interesting for some use cases. For example, we have one that is very nice that allows different banks to improve the accuracy of their credit card fraud detection.
J: So we orchestrate all of this, allowing the computation to move to where the data is. Nevermined is a generic framework allowing these kinds of use cases, and we are looking to expand these compute capabilities to use cases where maybe the computation needs to happen on-chain — a totally different kind of use case — and also to some use cases where maybe the computation needs to happen very close to a miner. And this is why I'm here; this is mostly what we're doing at Nevermined.
J: If you have any questions, please feel free to ask. Just to finalize, and to relay the message for which I was accepting this call: everything that is about pushing standards, working together, and common research — this kind of stuff, I think, benefits everyone.
I: Hi everyone, I'll take the lead on this one. Let me share my slides real quick. So yeah, I'm Sergey, and Eli is my co-founder, at Kamu. We're basically a deep tech company here in Vancouver building a planet-scale data pipeline, and to explain what we are, I'm going to start with the parts of the problem statement that are kind of unique to us — our take on what Web3 data is. So we believe that Web3 data is going to be dynamic.
I: I'm a strong believer that most of the valuable data flows today are actually continuous and long-living — like the data produced by DeFi, exchange rates, or the data coming from IoT devices. It doesn't appear and disappear overnight; it's stable, it's there, it continues to pump data. So we, as a society, need to be able to make real-time insights from this data, and for this we need some sort of autonomous processing.
I: What we have right now, unfortunately, is far from that. Most of our workflows are highly manual, a human is constantly in the loop, and we're constantly getting outdated results; we're deeply stuck in this batch-processing local optimum. This is something that we're trying to get out of, because batch processing on dynamic data is dangerous — most of the time it produces incorrect results — and it's also expensive.
I: We believe, of course, in decentralization, and by this I mean many more small and micro data publishers coming into the space — personal data, of course, but also an IoT device on its own being a micro data publisher. And if you apply this to a lot of Web3 data solutions today, they simply don't scale to this type of granularity.
I: They're okay for developers, but we believe they're a nightmare for data scientists — not something that we want to continue with. And, of course, data needs to be trustworthy, because if not, it's completely useless. Here we're talking about bringing verifiability and accountability, but at big data scale: we believe that trust chains need to extend to the origin of data — to the device that publishes the data, or to the business flow that generates it.
I: People on our network build autonomous data pipelines. The whole idea is for a person to write a query once, and the network just continues to operate this query forever. We're focusing on collaboration and reuse — people working together to build these open data feeds — we're bringing verifiability to source data, and all the processing done by the network is deterministic, for reproducibility.
I: So trust is basically anchored on the publisher side and, like I said, blockchain is part of that solution. It can act both as a source of data and as a consumer of data, and it can also interact with the network as a sort of control plane, directing what type of data processing should be going on. So this is how it looks.
I: Basically, what you see is a typical enterprise data pipeline, except that in our case every step here can be owned by a different person or a different organization. Data coming from the publisher we turn into our ledger-like data format; it can live on any storage, IPFS included, and people can discover data sets on this network and start building new data sets by composing code. In our case it's at streaming scale right now.
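The "ledger-like data format" can be pictured as an append-only chain of blocks, each committing to its predecessor by hash — which is what lets any consumer re-verify history and makes deterministic reprocessing meaningful. A minimal sketch of the pattern (illustrative only; Kamu's actual metadata chain format differs):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Canonical hash of a block's JSON representation."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, records: list) -> list:
    """Append a block of records that commits to the current chain head."""
    prev = block_hash(chain[-1]) if chain else None
    chain.append({"prev": prev, "seq": len(chain), "records": records})
    return chain

def verify(chain: list) -> bool:
    """Re-walk the chain, checking every back-pointer and sequence number."""
    for i, block in enumerate(chain):
        expected = block_hash(chain[i - 1]) if i else None
        if block["prev"] != expected or block["seq"] != i:
            return False
    return True
```

Because any tampering with an earlier block breaks every later back-pointer, a downstream consumer can trust the history it replays without trusting the storage it came from.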
I: So you can try all this today. We basically have our command-line tool, which we call our "git for data" pipelines. It has two engines integrated right now — the engines are kind of pluggable — we're using Apache Spark (the streaming subset of it) and Apache Flink. It comes with IPFS support, so you can share both your data and your data pipelines on IPFS, and there's also a nice Jupyter integration and web UI. The easiest way to try it is to just go to demo.kamu.dev and give it a try for yourself.
I: Areas of interest for us include IPLD and IPFS — improving the tooling, bringing schemas and codegen to IPLD, etc. — and we're also interested in DID-based auth and access control for data; I immediately see some interesting possibilities here with UCAN and WNFS. Also, of course, verifiable compute and privacy — especially differential privacy — for data and data processing; and for streaming data processing especially, bringing it to SQL to process blockchain data, and also achieving fine-grained provenance. So yeah, I'm going to share these slides afterwards.
A: Thank you, Sergey. All right, beautiful! Well, I have to say I'm super energized by just the breadth of projects everyone's working on, and thank you all so much for taking your time to help us really start building this community from the ground up. Please do also check in on the Slack channel afterwards for any additional comments — I notice folks are starting to make connections, which is excellent — and we'll have some website material coming out soon. So much, much more to come.