From YouTube: Compute Over Data Working Group 2nd Session
Description
On today's meetup of the Compute Over Data Working Group:
- Daniel Baker from Qubit9 discusses their project's architectural goals: https://www.qubit9.com/
- Parth Shukla from 180 Protocol shares an overview and demo of their project: https://www.180protocol.com/
- David Aronchick leads a discussion around shared standards for compute workload invocation.
Get involved at: https://www.cod.cloud/
Follow us at: https://twitter.com/codworkinggroup
A
We are very fortunate to have Daniel from the Qubit9 team and Parth from 180 Protocol. Today's agenda is going to be a much deeper dive on each individual project: getting to know a little bit more about their use cases, and a little bit of their designs, as much as time allows. Then at the very end, David and I are going to come back with some updates about the website we're building for the community, and a couple of other potential technology standards that we're seeing pop up as well. So Daniel, if you're ready, I'll hand it over to you.
B
Great, thanks. Thanks for inviting me. I'm not presenting today; I'm just going to talk verbally about what we're working on. Some of the things are a little bit under cover at the moment, so we're still working on things in the background. Our goal is to take advantage of decentralized web services, and I say web services deliberately, because we don't want to focus on one area where we say: oh, let's go Web3, or let's go pure blockchain.
B
Let's go just storage, or let's go just, you know, aggregated marketplace compute, and things like this. So what we're aiming to do is fill the gap that enterprises are looking for: a cloud platform that is delivered and sold through distributed services. How distributed services look can take many different forms. You've got, obviously, blockchain solutions, you've got pure dApps, you've got classic architecture that you can aggregate together.
B
There's a lot of orchestration that has to happen in the background, in the back end, to make those things work, and to make them work in harmony as well. So one of the things we want to eventually do is have a cloud-platform-like experience for developers and enterprises around the world. So it's touching on many different aspects, and I'm certain.
B
We can't solve all of this, and we need to partner with a lot of people to bring the right solutions together and integrate things. But we are very, very determined to take advantage of decentralized compute around the world, because the scale opportunities are immense. I'm also a strong believer that four companies should not dictate how the internet operates. So we have a very strong moral belief in this company that I've built here: we want to put that back with the people, and align and build the internet.
B
The way it was originally intended to be was loosely coupled systems that solve problems, right? Our first launch product will be around decentralized object storage, and the use case that we are aiming for is around backup personas and developers. So we're looking at large enterprises that have massive data storage needs that are largely unchanged, and we want to offer some immutability to those backups so that they're protected against ransomware.
B
These are all important things to our customers that we're focusing on as well, along with certifying their backup software. Lots of backup software around the world offers S3-compatible back-end storage mediums, and we're focusing on certifying those so that customers have a high guarantee that all of their scenarios will work; and obviously, if something doesn't, we'll engineer it so that it will. We're starting with object storage and then expanding our portfolio over time, when that starts to take off, to look at other services that either complement it or even broaden the personas.
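One concrete detail that comes up when certifying backup software against S3-compatible back ends is verifying that uploads landed intact. As a hedged sketch (the multipart ETag convention shown is the one Amazon S3 documents; a given S3-compatible store may differ), a certification harness can recompute the expected ETag locally:

```python
import hashlib

def s3_etag(data: bytes, part_size: int) -> str:
    """Compute the ETag an S3-style store is expected to report.

    Single-part uploads use the plain MD5 hex digest; multipart
    uploads use the MD5 of the concatenated per-part MD5 digests,
    suffixed with "-<number of parts>".
    """
    if len(data) <= part_size:
        return hashlib.md5(data).hexdigest()
    part_digests = [
        hashlib.md5(data[i:i + part_size]).digest()
        for i in range(0, len(data), part_size)
    ]
    combined = hashlib.md5(b"".join(part_digests)).hexdigest()
    return f"{combined}-{len(part_digests)}"

# A 12 000-byte backup uploaded in 5 000-byte parts -> 3 parts,
# so the computed ETag ends in "-3".
blob = b"backup-bytes" * 1000
etag = s3_etag(blob, 5000)
```

Comparing this locally computed value against the ETag the store returns after upload is one cheap end-to-end check in a certification suite.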
B
That makes sense for our customers, based on what we're seeing in use and what the market's telling us is needed, and where we can integrate partners. This is why this distributed CDN thing is super appealing and super interesting to me: how can we contribute to that? We can help develop it, obviously, but we can also integrate and offer it as our own network starts to expand, and have these standards that we can all work to.
B
That makes the developer experience really easy and seamless, having open standards. One of the goals that our CEO has written into our mission statement is that an open-standards platform is eventually what we want to have, with that amazing enterprise-level developer experience, like something you'd expect from a Google or an AWS, but easy to use.
B
The idea is that we have not defined our own APIs, our own standards, our own ways of doing things; rather, that we've collaborated with a lot of people to help evolve the new standards in the Web3 and decentralized world. So that's one of our goals. We want to play nice and contribute and give back in this arena. We're looking to launch hopefully soon enough; I can't give away details yet, because, you know, that's my product manager's role, but we've been working on this for the last six months.
B
We are definitely well into our testing phases, with, you know, beta phases and testing with customers. All the use cases work, they're fantastic, and we just need to make the product richer and more resilient. Then we're building out our own infrastructure, like the central infrastructure part that coordinates and orchestrates the jobs, and then we'll focus on growing our Web3 node network and things like that in the future, and integrating future services.
B
My background, just in case anyone's wondering why I'm here doing this: I was with Mesosphere (Apache Mesos and D2iQ Kubernetes); I think, David, you know a bunch of people over there. Before that I was at Iron Mountain as the Global Director of Cloud Services, storing a lot of customer data in about 14 physical and five virtual data centers around the world, where we were custodians of customers' data, and so we had really tight regulations around banking and insurance information there.
B
We had to put governance on top of these documents, and we had to extract information from the documents and present it back to customers in digital form, so offering value-add on top of that. We built a governance system around that, around the world as well, and so that relates really well to being good data custodians.
C
Daniel, for those that aren't aware, Daniel is so underplaying what he did. The level of scale and work they did at Mesos and D2iQ was outstanding, so we couldn't be happier to have him as part of this.
A
Very good. Thank you, Daniel, we appreciate it. We'll stay close as the roadmap develops, so we'll keep the community abreast of any changes. Thank you. Now we can shift to the 180 Protocol folks; I know we've got Phil and Parth on the line. We can hand it over to you guys and let you take it from there.
D
That's good. Thanks, Wes, and thanks, Daniel, for the amazing presentation. I'm going to just start sharing my screen; I have a short pitch deck and then I have a demo as well, so I'm just going to carry forward with those.
D
I am assuming you can all see my screen. Great, awesome. So it's a pleasure to be presenting to you guys. We've talked to some of you before, but for those we haven't: we are 180 Protocol. We are unlocking the value in sensitive data assets. We're all about privacy: introducing privacy and making a huge swath of data that's underutilized usable. Moving forward, the problem that we're trying to solve is that there is a vast amount of data produced every day by enterprises and users alike.
D
That data is lost and underutilized. Data is created, consumed, and stored at an alarming rate that keeps increasing, but the cost of keeping this data, and of processing and computing over this data, keeps going up. So there's fundamentally a need for compute that is cheap and scalable.
D
Another facet of this problem is that privacy is one of the key factors that prevents enterprises from sharing their data: privacy concerns around sensitive data assets, fear of exposing trade secrets, and simply a lack of knowledge that the sensitive data that enterprises, and even consumers, have is of use to anyone. So the problem is manifold. There's the infrastructure problem around storage costs, and, you know, those going up, but there's also the coordination and the trade costs.
D
We are trying to solve that problem, and we're solving it through three components. We are building on Filecoin; we use Filecoin as the storage layer. Filecoin is relatively cheap compared to most of the major centralized cloud vendors, and it guarantees storage across space and time. On top of that, what we've built is a compute framework on sensitive data.
D
We use hardware enclaves and trusted execution environments, which are basically black boxes that can take encrypted data inputs from trusted adapters, decrypt those data inputs, and run any kind of computation on them. We also have the notion of data unions; this is more of an enterprise-geared product, but we want to expand it out. In the previous slide I mentioned that coordination, and the lack of awareness that sensitive data is required or needed by some counterparty, is a big issue, and data unions, we think, can solve that problem.
D
With data unions, you can have a private, permissioned network where you can incentivize the right kind of data sharing. By codifying what is good and bad data sharing, you can reward the actors that share the right kind of good-quality data.
D
You can also introduce controls around the data flows, and governance around what schema structures of data are allowed to be computed on. Currently, we've already built out this product, the data union product, using some foundational technologies. What we've used is a private, permissioned ledger called R3 Corda.
D
We use that simply as an audit and trust layer, to record decentralized workflows on the ledger. We use something called Conclave, which is another project that allows for interaction with hardware enclaves, and we're supporting other such messaging frameworks, namely Injuna, which is another, more flexible TEE messaging framework.
D
Our project is ongoing, and there's a huge push towards adding value-added services in the whole Filecoin ecosystem. Being proponents of privacy, and of privacy of data while it is in use, we want to extend that functionality, or the knowledge that we have generally about privacy, to Filecoin as well. That's something that's still in the concept phase, and we're trying to promote it, but we want to introduce private computation for Filecoin.
D
Our vision is that if Filecoin storage providers have sensitive data assets that are encrypted and stored on them, then they should be able to run compute queries on that encrypted data. Now, there are various ways to achieve that: you could have a hardware enclave, you could have homomorphic encryption, you could have something like differential privacy. There are many different kinds of privacy techniques, and what we want to do is figure that out.
D
So, to summarize: our offering can really, you know, help the Filecoin network solve this problem of privacy and compute on sensitive data while in use, and we think that we can add value. I quickly want to talk about some of the use cases that we think, from our lens, are possible.
So
my
my
co-founder
phil
is
on
the
call
and,
and
one
of
our
other
teammates
are
both
of
them.
D
Help
me
make
the
slide,
and
what
we
really
wanted
to
get
across
in
the
slide
is
that
we
think
of
of
use
cases
for
decentralized
compute
in
a
very
different
way.
I
think
for
most
people,
I
think
one
of
the
major
things
that
was
never
possible,
that's
possible
with
decentralized
compute
is
the
notion
of
decentralized
marketplaces
and
in
any
marketplace.
You
have
the
notion
of
a
trade
exchange
of
value
you
and,
and
the
compute,
thus
emanating
from
decentralized
compute,
could
power
these
commercial
transactions
that
are
truly
decentralized.
D
You could classify these use cases almost into pre-trade analytics, which enable a transaction to occur, and post-trade analytics, which occur based on the transaction actually happening. For pre-trade analytics, you could have buyers and sellers aggregating their supply and demand for assets, and run compute queries to find the optimum match.
D
You could have commercial discovery, you could optimize price discovery, and once you have a match based on this decentralized compute, you could actually even have the exchange of goods happening natively, without the need of a coordinating central actor. So I think the use cases we're thinking of are quite exciting; they're very commercially oriented, and we really want to bring these to life.
D
That's our team, and we're all three very passionate individuals: I'm the technologist, Phil is the CEO, and Sarah is our intern. Finally, that's our team of advisors, and yeah, that's us! I'm going to stop presenting.
D
Yeah, go for it. Okay, so just to set the scene: the demo I'm going to show you is of that data union product that we have. I'm going to be simulating a decentralized network with four different entities on my local machine.
D
Sharing my screen again, just bear with me for a second. Sorry for the scary terminal; this is just to evidence that I'm running a Docker Compose process on my local machine. I've already done some configuration to set up a data union: I've defined roles, I've defined the schema of the data that's going to be processed, and this is all configured for the network.
D
I'm going to start now with the first data provider. I'm logged into a front-end app that we've developed, which is open source; it's simply querying the individual nodes that I've logged in as. This is the screen for a data provider. What you have is: rewards for data that you've shared over time, that you've earned for sharing that sensitive data; you can manage the data that you've shared; and you can engage in governance around what schemas are possible, or what kinds of flows should be available.
D
So I'm going to go to the data stream and select a data category. Right now the coalition only has automotive data supported, but you could have many different data categories, each with their own unique schemas and unique transformations; there's no limit to that.
D
So I'm going to select that as the data provider. This is the scope of the grant that we've been working on with Filecoin: we have added storage of encrypted data assets on Filecoin using the Estuary API. So I'm going to select that as the storage type, and at this time I'm going to generate a data encryption key. The data encryption key can be generated per unique data asset, or you can use the same one for multiple data assets.
D
This data key itself is encrypted again by a key encryption key that's stored on an HSM. Currently we support an integration with Azure HSM, but we want to actually find Web3 alternatives to that as well. Finally, I'm going to choose the appropriate file. I already have this CSV that I've configured; just to show you, it's not a big data set.
D
So I'm going to select the CSV and submit it. At this point the CSV is first encrypted and then actually uploaded onto Filecoin, using the S3 API. I'm going to log out of here and log in as a second data provider, which is going to perform a very similar action. The UI looks very similar; think of it as these two actors performing the same action sort of asynchronously, so there's no need for this to be done at the same moment.
D
You know, data providers can keep on updating their data directly by API, so they don't even need to use the GUI; it's all very API-enabled. Finally, I'm going to select the second data set. The schema, the structure of both the files, is the same, but the contents can vary.
D
Finally, as the last step, I'm going to log in as the data consumer. This is a pull-first model: as you can imagine, the providers have to have published data assets that are already uploaded onto Filecoin, and the data consumer is basically able to query for those data assets at any given point in time.
D
They themselves can also store the data output asset, either on Filecoin or locally on their relational database, going through the same step to generate an encryption key for the data output. At this time I'm going to throw it on Filecoin, and at this point I think the process should be done. You can see that the host node is printing out certain log messages.
D
It received the inputs from the first provider and from the second provider, then it received a request from the consumer, and then it was able to apply the transformation and send out rewards to the data providers for sharing their data. At this stage, what I'll do is refresh my screen, and you can see that the data consumer is able to see the output of the calculation. If I click on this, it should actually download and show you the output, and voila.
D
You can see that the data consumer is able to see high-level analytics based on the input data. It does not see individual car-level sales; what it sees is aggregated analytics per car model. So it's not seeing, you know, to what country each Model 3 was sold; it's seeing, at an aggregate across each of the data providers, what the average price was across all countries, and it's able to see the units sold and the total sales.
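A minimal sketch of the kind of per-model aggregation the consumer sees (the column names and figures are invented for illustration; in 180 Protocol's design this transformation would run over the decrypted inputs inside the enclave, not on the consumer's machine):

```python
from collections import defaultdict

# Hypothetical row-level sales records pooled from two data providers.
rows = [
    {"model": "Model 3", "country": "DE", "units": 120, "price": 41990},
    {"model": "Model 3", "country": "NO", "units": 80,  "price": 43500},
    {"model": "Model Y", "country": "DE", "units": 60,  "price": 53990},
]

def aggregate_per_model(rows):
    """Collapse row-level sales into per-model aggregates so the
    consumer never sees country- or row-level detail."""
    acc = defaultdict(lambda: {"units": 0, "sales": 0.0})
    for r in rows:
        acc[r["model"]]["units"] += r["units"]
        acc[r["model"]]["sales"] += r["units"] * r["price"]
    return {
        model: {
            "units_sold": v["units"],
            "total_sales": v["sales"],
            "avg_price": v["sales"] / v["units"],
        }
        for model, v in acc.items()
    }

out = aggregate_per_model(rows)
# The consumer sees only per-model totals and averages.
```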
D
You may remember that the first screen, when I logged in as the data provider, was blank; but now you can see that there is actually a reward state that they own, which is basically weighing the data quality along four factors: the amount provided, relative to the other data provider; the completeness, in terms of how many columns are provided; the uniqueness, in terms of whether the column values were different from the column values provided by the other data provider; and how frequently the data is updated.
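The four factors described (amount, completeness, uniqueness, update frequency) suggest a simple weighted score. A hedged sketch, with the weights and the [0, 1] normalization invented for illustration rather than taken from 180 Protocol's actual reward logic:

```python
def reward_score(amount, completeness, uniqueness, freshness,
                 weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine four per-provider quality factors, each normalized to
    [0, 1], into one reward weight. The weights are illustrative,
    not 180 Protocol's actual parameters."""
    factors = (amount, completeness, uniqueness, freshness)
    if any(not 0.0 <= f <= 1.0 for f in factors):
        raise ValueError("factors must be normalized to [0, 1]")
    return sum(w * f for w, f in zip(weights, factors))

# Provider A supplied more rows; provider B's columns were more complete.
score_a = reward_score(amount=0.7, completeness=0.8, uniqueness=0.5, freshness=1.0)
score_b = reward_score(amount=0.3, completeness=1.0, uniqueness=0.5, freshness=1.0)
```

With these example weights the amount factor dominates, which matches the demo's observation that rewards skewed toward quantity.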
C
So you showed, I assume, you know, as you think about these data providers wanting to highlight their stuff: they can set the summarizations, right?
D
Yes, correct. So our SDK has an interface that you can override... sorry, I'm trying to stop that and I'm struggling with it. Oh yeah, it's right there. Anyway, to answer your question: they can easily configure what the aggregation mechanism is, just by overriding an interface.
B
Just a quick question. First of all, it looks great. Are there standards for this data, like a format that people have to upload to? Do you define that, or does the market, or their industry?
B
Thanks. And looking at the feedback on the provider node: there was no quality reward, you tended to have quantity. Is there a way for the consumer to give quality feedback on the data they receive, in case you've got a malicious actor, or someone just gave, like, null fields or zeros and stuff like that?
D
Yeah, that's a very accurate question, and one we've been asked by many different people. The answer is that, initially, for individual data shares there's no way to know that, because inherently the whole premise is privacy. The only time the data is decrypted and known is inside the hardware enclave. You can have checks around null fields; those are easy. But you can't really value the commercial validity of the data. So, to actually address that:
D
What we want to build out is actually a weightage factor that is based on data shared over time. If the data provider gets a score with every share, you can aggregate those scores over time and almost have an Uber-like rating for the data provider that is known to the whole community. That is a self-policing mechanism for the data providers to act in a good way: if you have a bad rating as a data provider, why would anyone want to include you in the coalition?
E
Sorry to weigh in here. We've separately experimented around self-optimizing regression models, which stem from whatever the transactional value is of the data sharing and data compute. But that's something that we're thinking about and developing; it's not something which is live, it's something in the pipeline, really.
E
Yeah, if you look at our wiki, I think there is a sort of technical document which looks at this problem and suggests an initial solver for a regression-based model. So if you have a look at that and have any interest, we're very happy to pick it up with you.
C
It truly does warm my heart; this is the kind of interaction and discussion that I want, and this is why we started the group. It does bring up an interesting point. One thing that I think is so powerful about the idea of the working group is having the common repo, having the common website, where people can talk about their use cases.
C
Their particular use cases, potentially even demos and things like that. We want to make those pages yours, not to supplant your pages, but so that when people are coming by and they want to understand how to engage, or which of these solutions out there they should choose, it's all kind of together.
C
One question I have for you is: are you already working with the Estuary team formally? If not, we're happy to connect you; if so, we'd love to simplify it, or whatever we can do there.
D
We have access to them; I have communicated with Brenda and, I think, one of the other people, so I have access to them. I'm on the ecosystem dev channel. So, all right, yeah.
C
All right, there you go. The other thing that we should all work on is, you know, really, one of the things is that people look at things like IPFS and Filecoin and they're like: well, you know, this is great technology, but is it being used? And then, you know, it's very, very easy to go to something like Filfox or anything like that and see how much space and things are out there. It might be interesting for us to work together on a, whatever, shared dashboard or something like that.
C
We don't have to reveal anyone's private information or anything you don't want, but, like, the extent to which we can say: hey, you know, all of these groups together are interacting with, whatever, 100 terabytes a month or something like that, just to give customers and clients, or customers of yours, you know, comfort in, like: oh, okay, wow, I'm not the only one.
B
Right, like a super simple, single one-pager, and start there, and then start to break it out and get more detail if people are interested, yeah.
C
I'd love to talk about how we lay that out. Like I said again, we're not the center point here; this is incredibly neutral, we should all come together. But I never want anyone to feel, again, even if you hear somebody else do a demo and it is directly in competition with you: it is my opinion that there is no competition at this point, right? Our competition is Spark and Hadoop and whatever; not that those are bad either.
C
It's just that that's the Web 1 and Web 2 worlds, which are approaching things in a very different way, and, you know, I think that this is just a pie that we all grow. So I'm with you, I love the idea of a map. We should figure out exactly how this looks, so no one feels like: oh, you're forcing me to trade off A versus B, or, like, somehow we're saying that because these people over here have whatever private thing, that's better than not private. I don't.
B
Have
you
all
seen
juan's.
B
C
We should share it, Wes; after the call we'll just share it in the channel. But it's basically this idea that, you know, it's a modification of "good, fast, cheap: pick two." It's similar, but for privacy and performance and security; I think those are the three.
C
You can only pick two; there's no picking three. You can get to the middle, but... oh, there it is.
C
Yes. So, with that in mind, I think it's incredibly important for us to develop the structures and the patterns right now to help each other, whatever it might be: like, here's the public websites, or here's how we talk about our roadmap, or here's how we, I don't know, have a consulting group that understands all of our technologies and can help customers decide, if you don't want to take it on yourself; all those kinds of things.
C
There's all sorts of processes and structure that we can help set up as it makes sense. Again, going back to the CNCF world: you know, there, people could list themselves as certain service providers, right? That's kind of outside of what we are, and, you know, all that kind of good stuff.
C
Okay, so we're about to 9:40, Wes. Did we have another demo, or was that it? We had two demos today.
A
That's all for the demos today, and then, if there's anything else, David, that we need... yeah.
C
We will put it up; please let us know what things... Wes will very aggressively move this into a public repo, so people can just do PRs or do whatever they like, and we'll have documentation on what to do around that. The second thing that I want to talk about is some standards, very, very lowercase standards, like subscript-s standards, around common things that we might need. One thing, you know, I'm very passionate about, for example, is Wasm.
C
I think it's going to be, you know, quite transformative, particularly in Web3. But we are thinking about what it would look like to have standards for how you interact with systems across the board, whether it's Wasm or Docker or straight binaries or whatever it might be.
C
One thing that we're talking about right now, for example, is the idea of an invocation model: what would it look like for you to execute against some back end? Could we have a, you know, like I said, lowercase-s standard around that? That's something that we're thinking about.
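As a sketch of what such a lowercase-s invocation standard might contain (every field name and value here is invented for illustration; nothing in this shape has been agreed by the working group), a provider-neutral job description could be as small as:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class InvocationSpec:
    """A hypothetical provider-neutral description of a compute job:
    what to run, on which inputs, and where outputs should land."""
    engine: str                                  # e.g. "wasm", "docker", "binary"
    image: str                                   # module, container image, or binary ref
    args: list = field(default_factory=list)     # arguments passed to the workload
    inputs: list = field(default_factory=list)   # content-addressed input references
    outputs: str = "ipfs"                        # where results are published

    def to_json(self) -> str:
        # Deterministic serialization, so two providers see the same spec.
        return json.dumps(asdict(self), sort_keys=True)

# The same serialized spec could then be handed to provider A or
# provider B, which is exactly the portability being discussed.
spec = InvocationSpec(
    engine="wasm",
    image="bafy-example/transform.wasm",
    args=["--aggregate", "per-model"],
    inputs=["bafy-example/sales-a.csv", "bafy-example/sales-b.csv"],
)
serialized = spec.to_json()
```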
C
Right now, when I say "we," I don't mean, like, Protocol Labs; this is just a group of people. I've talked to several people in the community who were on the call from last time, informally, and, you know, we'd love to talk about what a standard might look like for that, such that you could say: oh, you know, today I was operating on provider A; I want to take that and execute against provider B, because they have a different thing that I want.
C
You know, a different part of the triangle that they specialize in. What might that look like? And so, things like that.
B
I've been thinking about this myself. Like, we will build our own CLI; we have to, we know that, to absolutely interact. But as part of that, any service we're building comes with an SDK, and then I started thinking broader, like a CDK, right, like a cloud development kit, absolutely; and I think at that point we could work together to contribute each other's parts to a collective cloud development kit.
B
In a repo where we say: hey, look, you know, here's storage, and I can contribute this, and this other provider can do their part on their storage, and this is all their command protocols and standards. And then someone else has got some, like, PaaS solution there, like: I need to pull data from source X and Y and put it in the storage, and that's all ready for them to consume, in a standard that's defined by us, I guess you could say.
C
Yep, yeah, so I couldn't agree more. I'm on the hook for (oh, thank you) I'm on the hook for starting to pull these together, just into a single central artifact that everyone can contribute to. My request for all of you is nothing more than: if you already have invocation specs, whatever they may be (it could just be a CLI, it could be, you know, some serialized format, I don't care what), just think:
C
A user wants to kick off some execution on my platform: what does that look like? I'm just collecting those in a single document, and then we can start joining together and saying: okay, well, we think here are all the various things. So pass it my way; again, this will all take place in public, and we'll go from there.
D
On the Wasm part, David, that's a very apt point. I agree with you that WebAssembly is the de facto standard; I mean, it's a very cool name, right, assembly for the web, and I think it has immense traction. Even in the confidential computing space, the adoption of WebAssembly is growing. I just posted a project's link in the chat; it's called Enarx, and they are actually trying to get WebAssembly-based runtimes inside an enclave.
C
Yep, totally agree. All right, so we'll list all this in here; I'll share the document. It's just in outline form right now, so, you know, time-box yourself to no more than one look at the thing, because that's the level of quality it's at right now. But we're trying to probe at, you know, all the edge cases, and the start is just collecting a bunch of, you know, other invocation models, so that we can be informed as we get to production.