From YouTube: The Graph - Core Devs Meeting #9
Description
The Graph’s Core Devs Meeting #9
This video was recorded: Wednesday, December 8 @ 8am PST, 2021.
0:00 Intro
1:35 Edge & Node Updates
8:55 Figment Updates
11:35 StreamingFast Updates
14:45 The Guild Updates
17:35 Matchstick
23:13 EIP 4444s
35:35 Semiotic Updates
42:00 Solana Discussion
The Graph's Media:
Twitter: https://twitter.com/graphprotocol?s=20
Instagram: https://instagram.com/graphprotocol
LinkedIn: https://www.linkedin.com/company/theg...
Website: https://thegraph.com
A: Before we hear from core devs on the latest updates, I just wanted to share that we're in the process of revamping not only these calls, but also how we share all of these core R&D updates with all of you. So moving forward you'll have access to recordings of the ad hoc meetings the different teams are having, as well as what the canonical roadmap looks like and the different formalized working groups we have here.

We will be organizing all of this using the Foundation's Notion workspace, so it should all be fairly accessible to all of you. This monthly call specifically will then be used as a way to share and discuss updates, as we'll do today. Those will be coming from the different working groups as major milestones are hit, basically. So do stay on the lookout for future updates; we'll post everything into the forum. Cool.
B
Yeah,
of
course,
I'll
probably
share
my
screen
just
to
walk
you
through
the
notes
I
have
here
yeah,
so
I'll
dive
right
in
we
shipped
recently
a
new
rust
gateway
that
replaces
the
old
one
that
was
written
in
typescript.
B
It
has
much
better
throughput,
so
it
should
be
able
to
carry
at
the
very
at
least
hosted
service.
You
know
amount
of
amounts
of
volumes
and
you've
all
seen
the
charts
on
twitter
about
the
traffic
we're
seeing
there,
and
so
this
should
be
able
to
handle
that
easily.
B
Also,
there
have
been
a
number
of
index
selection,
reliability
improvements,
so
you
know
primarily
to
improve
the
client
or
consumer
experience
when,
for
instance,
there's
faulty
indexers,
and
then
we
fall
back
to
others
and
in
better
ways
and
that
we
also
score
them
more
dynamically.
So
we
react
more
more
more
quickly
to
changes
in
the
environment.
We've
also
been
new
graph,
node
releases
and
new
index
releases.
B: Both of those are really super important, and if there are any indexers on the call today, I urge you to update your indexer service at least, and graph-node; I think the indexer agent the same way. The biggest change that I personally am excited about and interested in is GIP-0020, unattestable indexer responses, which has been released as part of these versions. That's why it's so important to update, because this is a way that we can detect in the gateway when certain indexers have issues that are due, for instance, to a corrupt database or something else. If we can detect this, that means the indexer doesn't get used for that particular query result and isn't on the hook for attestations for problematic results that they themselves probably don't have confidence in, but don't know that something's up. It also means that we can detect these from the consumer perspective and fall back to other indexers who, hopefully, don't have the same corrupt database.

So this is catching errors that are specific to individual indexers and are not part of what's expected: not, say, query syntax errors that graph-node detects, or schema mismatch errors, but errors that result from something being broken on the indexer. This is super important because it also increases reliability in the network. So yes, please update, and please read the release notes; they will have a lot of information.
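The gateway's fallback behavior described above can be sketched roughly as follows. This is a hypothetical simplification; the names are illustrative and not the actual gateway API:

```python
def first_attestable(responses):
    """Pick the first response an indexer is willing to attest to.

    `responses` is an ordered list of (indexer, payload, attestable) tuples,
    ordered by the gateway's indexer-selection score. Responses flagged
    unattestable (e.g. the indexer suspects a corrupt database) are skipped,
    so the indexer is never on the hook for an attestation it lacks
    confidence in, and the consumer still gets an answer.
    """
    for indexer, payload, attestable in responses:
        if attestable:
            return indexer, payload
    return None  # every indexer flagged its response; surface an error
```

The point is that the flag flows in both directions: the indexer avoids signing a result it does not trust, and the consumer transparently falls back to the next-best indexer.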
What's currently in progress on the Edge & Node side is that we are planning what we call file data sources. There is a GIP out for that, I believe in the forum as well. This is essentially replacing what was previously `ipfs.cat`, basically. It's supposed to support file storage networks like IPFS or Arweave, and it basically segregates the files that we fetch from those networks as part of subgraph indexing from chain data sources, like Ethereum contracts or NEAR contracts.

So this is being planned. It's a pretty invasive feature, it's been in the pipeline for a long time, and we've racked our brains on how to solve this; hopefully this time around we'll actually make it happen.
B: The first ones that we will most likely be working on are on the trigger side: pipelining the block stream, so pipelining the filtering for relevant triggers, so that we don't do that in sequence for block ranges but pipeline those block ranges. There's also ongoing work to make the firehose work for Ethereum, or make it work well for Ethereum. Specifically, one area that we're looking into is pre-filtering the information that the firehose passes to graph-node on the firehose side, with graph-node providing those subgraph-specific filters, so that sparse subgraphs, or block ranges where there is no information, are just faster and we don't pass all the blocks over to graph-node.

Then on the store side, we're looking into pipelining writing entity data back to the store, so that when you are done with a block and you want to write the entity changes, that doesn't block processing and we can continue indexing additional blocks while we are also queuing the writes up.
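The idea of queuing writes so block processing never stalls can be sketched with a background writer thread. This is a generic producer/consumer sketch, not graph-node's actual implementation:

```python
import queue
import threading

def start_writer(store, batch_queue):
    """Drain queued entity-change batches into the store on a background
    thread, so the indexing loop can move on to the next block immediately."""
    def run():
        while True:
            batch = batch_queue.get()
            if batch is None:          # sentinel: shut down the writer
                return
            store.extend(batch)        # stand-in for the actual DB write
    t = threading.Thread(target=run)
    t.start()
    return t

def index_blocks(blocks, batch_queue):
    """Process blocks and hand each block's entity changes to the writer
    without waiting for the write to finish."""
    for block in blocks:
        changes = [f"entity-change@{block}"]  # placeholder for mapping output
        batch_queue.put(changes)              # returns immediately
```

Because the queue preserves order and a single writer drains it, writes still land in block order even though indexing runs ahead of them.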
There's also an investigation into using COPY versus INSERT statements, which are supposedly faster; I think newer
Postgres releases also have a built-in pipelining mechanism, and so we're looking at a few of those kinds of improvements. A slightly bigger feature that we're pondering introducing is immutable entities. In a lot of cases, where you for instance have transfers or something, you create a Transfer entity and it never changes.
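To illustrate why immutability helps, here is a toy sketch (graph-node's actual storage versions entities by block range, which this only gestures at): a mutable entity needs per-block version bookkeeping, while an immutable one can be stored once and looked up directly.

```python
class VersionedStore:
    """Mutable entities: keep every version with the block it took effect,
    and answer lookups 'as of' a block by scanning the versions."""
    def __init__(self):
        self.versions = {}  # entity id -> list of (block, value), in block order

    def set(self, entity_id, value, block):
        self.versions.setdefault(entity_id, []).append((block, value))

    def get(self, entity_id, block):
        candidates = [v for b, v in self.versions.get(entity_id, []) if b <= block]
        return candidates[-1] if candidates else None


class ImmutableStore:
    """Immutable entities: written once, so lookups are a plain dict hit
    and no version bookkeeping or block-range filtering is needed."""
    def __init__(self):
        self.rows = {}

    def set(self, entity_id, value):
        if entity_id in self.rows:
            raise ValueError("immutable entity cannot be updated")
        self.rows[entity_id] = value

    def get(self, entity_id):
        return self.rows.get(entity_id)
```

Skipping the version tracking is exactly what makes things "simpler and faster" for entities like transfers that are written once and never touched again.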
B: So that would make a lot of things simpler and faster in graph-node, when you know the entities never change. And last but not least, we've also started working on what was previously talked about as integration testing, or proof-of-indexing cross-checking, or dispute analysis. This is now all rolled into one effort, for which I think "indexer cross-checking" is a bit more appropriate: basically a system to continuously cross-check indexing results and query results across different environments, and those could be indexers.

Data from indexers and subgraphs could also be run in a network mode, where all the indexers in the network are cross-checked, or in a peer-to-peer mode that different indexers can run, where they can configure other indexers to collaborate with: getting detailed information from them to help debugging, or detecting discrepancies in the data or the query results, just to give them more confidence that everything is running fine on their side.
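A minimal sketch of the cross-checking idea (illustrative only): group indexers by the proof of indexing they report for a block, and flag any minority as divergent for further debugging.

```python
from collections import defaultdict

def cross_check(pois):
    """Group indexers by reported proof of indexing (POI) and return the
    majority POI plus the indexers that diverge from it.

    `pois` maps indexer name -> POI digest (any hashable value).
    """
    groups = defaultdict(list)
    for indexer, poi in pois.items():
        groups[poi].append(indexer)
    # Treat the most common POI as the reference result.
    majority_poi = max(groups, key=lambda p: len(groups[p]))
    divergent = sorted(
        i for p, idxs in groups.items() if p != majority_poi for i in idxs
    )
    return majority_poi, divergent
```

The same shape works for query results: hash each indexer's response, group, and investigate the outliers.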
B: So this work has also started, and we'll share more information about that pretty soon. I think that's probably not complete, but that's the update from Edge & Node.
A: Perfect, thanks Jannis, right on time. If you have five minutes each, that would be great. Next is Figment. Joseph, are you on the call?
C: All right, so yeah, I'm gonna start with some challenges we're having right now. The first one is around the performance of syncing older block heights. This is currently a research task, so we have it here under challenges, but at the same time we have it under next steps: we are currently trying to find some strategies and doing some research about how we could actually increase the performance. I think this is going to be room for collaboration with the other core dev teams as well.

We also faced some issues related to the merger, and we actually created a tool to verify the integrity of the merged files, so we have it here under achievements; that was a challenge that was really resolved. For everyone here on the call: everything that we're doing right now is around integrating Tendermint, and we are close to the final steps. There's a lot of finalization and some bug fixing and testing. On the active side, we are implementing Tendermint within the Graph CLI right now and wrapping up the last pieces of the extractor package, and those are the few last steps for integrating Tendermint. We were able to finish the event trigger refactoring and, as I said, we fixed a lot of bugs, made improvements, and got some updates on the ingestor stack. We are now running the entire multi-node setup: the ingester, relayer, merger, firehose, everything.

The plan is to keep it running for two weeks now, to be able to test it and check if there's something malfunctioning or if we run into some problems. That's it for everything that we're doing currently. The next step is actually the deployment of the firehose stack, to keep testing it, and to create a full subgraph which we're also going to be using for our testing.
D: So let's do that. We have done a lot of work on Solana in the past, as you guys know, and there's been a release of the NEAR stuff. This is continuing a little bit, processing the testnet; we put a PR out to update the latest data schema up there, so NEAR should be more advanced and ready for more use. So everyone out there, if you want to have NEAR indexing, try it out, go ahead: now is the time.

We've done a lot of Solana work, and we're continuing that work. There's a lot of things that need to be managed there, and we might have some discussions today, if we have the chance, about how we're going to share that data; not everyone wants to synchronize from genesis, which is pretty hard. That's one thing.

I would also like to call out to all the indexers that have not yet spun up a firehose: please do. If you want to be a kingpin in The Graph, run a firehose and start doing that work, so we can accelerate the performance improvements downstream for subgraph developers. Get involved in that; it's pretty cool, and you're going to be really highlighted. We're going to have a little board there with your face or something when you're one of the first to do that.

We've been tackling some BSC issues, a lot related to the node, so that's been annoying: people confusing the subgraphs and the nodes. Hopefully we can bring the firehose for BSC soon, so we get the performance boost that comes from having it. So if you're interested in being a kingpin, do that with BSC too.

To keep up with demand, we're working right now on a lot of improvements like Jannis told us about, around performance and the sparseness of the firehose. We're implementing transforms for Solana right now, which is required, and then ingraining that for acceleration of sparser queries, or large-scale filtering on whole histories.

So that's approximately it. Oh yeah, the team is also working to make sure that call handlers on Ethereum work well with the firehose, and doing some comparison work, so that's going to be cool. Okay, I don't want to take too much time here. We've done a little bit of a collaboration with Figment, which I'm really happy we're going to continue doing, reviewing the GIPs coming out of there, the Tendermint work.
A: Yeah, amazing. We might have some time to properly discuss that today. Cool, thanks Alex. Okay, there's a lot of teams here; I'm counting seven on this call, which is crazy, and we don't have everybody here. So I'd like to take the time to also shout out The Guild, a new team that's been actively working on subgraph features.
E: Yeah, sure. I'll start with a short introduction: I'm Dotan, a member of a group of developers called The Guild. We're a group of open source developers mainly focused around GraphQL. We're also part of the GraphQL Foundation: we're maintaining the reference implementation of GraphQL, and we help with the maintenance of the GraphQL spec. We're also working on tools around GraphQL, like GraphiQL and a few more, and we're building tons of open source around GraphQL, like GraphQL Mesh, GraphQL Code Generator, GraphQL Inspector, and tons more tools. We're super happy to join and be a part of this huge thing; we just officially joined a few days ago, just the contract signed.

I won't take much time; I have a very quick update on what we're doing. We're taking baby steps on learning graph-node and everything related to it. We're trying to compare and learn from the implementation based on our experience and knowledge from the GraphQL ecosystem that is not Rust-specific. I have a few more updates here, but the biggest thing we're going to work on soon, I guess, would be API versioning and composition of subgraphs. That's all from us; super excited to be part of the team. Thank you.
A: Amazing, thanks Dotan. It's good to know we have a new team focused on GraphQL stuff; these guys know what they're building, for sure. So GraphQL composition will be a major milestone, hopefully in the upcoming months. Cool, thanks a lot. Lastly, we also have LimeChain. LimeChain has been working very closely with us, and there's a lot of new tools coming out. Specifically, we have a new one called the subgraph debug tool. I don't think many core devs know exactly what this tool does, so it might be a good time to talk a little bit about it. I think I saw Yuliya in the chat, or Petko. I don't know, guys; can you update us?
F: Yes, hi! Zoom doesn't seem to give me feedback on the sound. So basically, we are the team behind the Matchstick unit testing framework, and I think I spoke about Matchstick a bit on the last core dev meeting. What's really the big news right now is this debug tool that was mentioned; it's also called a subgraph forking tool.

Unfortunately our teammate, the developer who was mostly involved with integrating this into graph-node, is not here, but I can give a brief explanation. The whole idea behind the debug tool was that really often a subgraph will fail for some reason after it's been deployed and after it's synced a few million blocks. Then at some point it will fail, and in order to get the subgraph up and running again, you would need to fix what you think is the issue, if you have the right logs, and then deploy again as a subgraph developer and hope that you had fixed it. But then, of course, the subgraph usually starts syncing from the beginning, from block zero, which is not really good, because it can take a really long time to actually see if your fix has done the job.

So what this subgraph debug tool allows subgraph developers to do is basically run their mappings using the store from the failed subgraph. Essentially, it will just run locally, just one block, the block that the subgraph failed on, and it will use the store that's already there from the failed subgraph, because the subgraph was syncing just fine until now and it is the most recent block that is actually causing the issue. That way, subgraph developers can get feedback really fast on whether they've managed to fix the issue.
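The store-forking behavior described here can be sketched as a local overlay over the failed deployment's store. This is a simplification with entirely illustrative names, not LimeChain's actual implementation:

```python
class ForkedStore:
    """Local store that reads through to a remote 'base' store (the failed
    subgraph's data) and keeps writes in a local overlay, so a single block
    can be re-run against realistic state without resyncing from genesis."""
    def __init__(self, base):
        self.base = base      # stand-in for the remote failed-subgraph store
        self.overlay = {}     # local writes made while debugging

    def get(self, key):
        if key in self.overlay:
            return self.overlay[key]
        return self.base.get(key)  # fall through to the fork base

    def set(self, key, value):
        self.overlay[key] = value  # never touches the base store


def rerun_failed_block(store, handler, block):
    """Run only the failing block's handler against the forked store and
    return the entity changes it produced."""
    handler(store, block)
    return store.overlay
```

The key property is that reads see the state the subgraph had reached before failing, while writes stay local, so the failing block can be replayed over and over in seconds.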
F: So that's the main thing that we've done. The debug tool has been finalized and it is actually part of graph-node, and I can maybe share some more information about that here in the chat or on the Discord once I'm done. Otherwise, there will be a new version of Matchstick releasing in the next few days, which has a lot of bug fixes for things that have come up in the Discord channel, and also a lot of new and useful features that we will describe in the release notes.

I guess one of the more important new features is that Matchstick will now have first-class support for Docker. Initially we tried going with the approach of having binaries already generated for a lot of operating systems, so that subgraph developers could use the compiled binaries when running Matchstick, but that didn't turn out to be the best idea, because a lot of subgraph developers reported issues locally. So that's why we think the best way moving forward is to actually use Docker right out of the box. Of course, if a subgraph developer wants to, they have three options, basically: using the compiled binary, using Docker, or building Matchstick themselves locally and then using that binary.

So yeah, I guess that's the most important thing. In the future we will be researching a Hardhat plugin, but I cannot give more information about that since it's at very early stages for now, and of course we will be working on stabilizing more features in Matchstick. But yeah, that's basically it.
A: Okay, great, thanks. You've also seen Semiotic building cool stuff; we might want to talk about it, but later on. Maybe we just need to speed up based on the agenda.

I think we're going to talk about something that's really related to what Semiotic has been building. We've recently seen this EIP-4444 proposal; it's something hot in Ethereum right now. This proposal effectively adds historical pruning to Ethereum clients, and you probably have seen Vitalik mentioning The Graph as one alternative to Ethereum archive nodes for serving such historical data. Can we talk about this? I know Zach and others have written a great blog post about it; I can link it here as well. I think we have Zach on the call; might be a good time to talk about this.
G: Yes. So EIP-4444 is a very exciting opportunity for The Graph ecosystem as a whole. What it is, is that... I can hear someone typing; is that you, Pedro? Okay, thanks. Sorry, my notes just disappeared behind Zoom and I'm lost; give me one second for technical difficulties. All right, thank you.

So, as a part of the push for Ethereum 2.0, EIP-4444 would add pruning of historical data in Ethereum clients. The problem that it's there to address is that the state required to run a client to verify the chain is currently over 400 gigabytes, and it's getting larger all the time. Typically, if you want to run a client that verifies the chain, it would require a one-terabyte disk, and a part of the Ethereum ethos is that anyone should be able to validate the chain on consumer-grade hardware. The way things are looking, they're soon going to be pushing the limits of what could be considered consumer grade; a one-terabyte disk is not something that everyone has.
G: This turns out to be more than half of the state eligible for pruning, so it would reduce the hardware requirements from a one-terabyte disk to a 500-gigabyte disk in the short term, and probably further over time for other reasons. This ties back into Ethereum 2.0 because they want to be syncing from something called a weak subjectivity checkpoint instead of syncing from genesis all the time, which is like a fast-forwarded part of history; it's kind of the same thing we talked about when we talked about warp sync for subgraphs. Using weak subjectivity checkpoints is required for the security model of proof of stake in Ethereum 2.0.

So what does that have to do with The Graph? Once Ethereum clients no longer store historical data that's older than one year, they will also be unable to serve queries for that state, and prominent voices in the Ethereum community, including Vitalik himself, have publicly endorsed The Graph as a replacement for the JSON-RPC API in Ethereum to query this historical data. In fact, The Graph is mentioned even in the text of the EIP-4444 proposal for this purpose.

There are two things that would need to happen, though, for dapp developers that are still relying on the JSON-RPC API to have a smooth transition over to using The Graph to serve that data. The first thing that we need is something called the Ethereum network subgraph, and someone in the community would need to build this: maybe a core dev, maybe a grantee, whoever. What that would be is that instead of building a subgraph for the specific purposes of a dapp, we would build a subgraph that is able to index all of the data relevant to the Ethereum chain in an agnostic way, and thereby be able to mimic the JSON-RPC API.
G: So you would be able to query things like logs, or account balances, or receipts, all that information. It might require some updates to graph-node in order to support this; I'm not sure. I know that's something that we looked at a long time ago, but we need to dig that out again and pick it back up.

The second thing that needs to happen, to keep the trust model the same for the dapp developer, is that we would need to move our verifiability story to a one-of-n model. That's the same trust model that a consumer would rely on for ensuring the liveness of light clients, so we need to have that same level of trust for dapp developers to feel comfortable migrating from Ethereum to The Graph, both for the whole story of it: verifiable indexing and verifiable queries. We don't necessarily need all of our feature set covered by a one-of-n trust model just yet, but at least everything that would be used by the Ethereum network subgraph to be able to mimic the same queries. So at least, say, verifiable indexing and verifiable queries for something like get-by-id would be useful for things like account balances, but not necessarily features like `where` and `skip` and more complex features. All right.
G: Basically, in order to allow us to both develop quickly and have high levels of security, we move different features through a pipeline of verifiability stages, where they start with higher levels of trust and end on validity proofs, which require basically no trust at all: if you have a trusted block hash, then you can get all of the other information that you care about from the indexer, so you don't have to trust anyone.

Those stages are experimental, arbitration, fraud proofs, and validity proofs. The first stage, experimental, is generally reserved for features that are not fully implemented: maybe their API is not fleshed out yet, or there's debate around what the right API should be, so they're sort of unstable, or maybe they have an outsized difficulty in being made deterministic. Things like full-text search or IPFS data sources (or rather, file data sources) fall into that category. A feature that is in the experimental stage cannot have its security guarantees enforced by the protocol, because it's not deterministic or what have you, so you can't, say, compare attestations.
G: It's not where we want any feature to be; ideally we're moving things quickly to arbitration, and in fact most features are in the arbitration stage today. The arbitration stage is when indexers sign attestations for things: query attestations and proofs of indexing. They sign these commitments to data, which can be brought to the chain and shown in a dispute, and then the arbitrators, which are elected by The Graph Council, will verify that that work has been done correctly, and there are penalties of slashing for not doing the work correctly.

So that's a governance model: you have to rely on the arbitrators to be active and honest participants. From there, we would like to move to fraud proofs. Fraud proofs are the one-of-n model where, instead of necessarily relying on governance, if there's at least one participant in the network that is active and honest, then they can force everyone else to also be active and honest.
G: You usually do this with something like a refereed game, maybe in the style of what Arbitrum has, where you can bisect computation and play out this little game. That game will surely tell you which one of them is correct: the honest party will always be able to win the game, and then the loser will be slashed. Those are pretty cool, and it's something that we've been looking at for a long time for indexing.
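The bisection game described here can be sketched like this. It is a toy version of the Arbitrum-style refereed game; the real protocol operates over on-chain challenges and commitments, not Python lists:

```python
def bisect_dispute(trace_a, trace_b, step):
    """Find the first step where two execution traces diverge, then have the
    referee recompute just that one step to decide who is honest.

    `trace_a` and `trace_b` are equal-length lists of claimed states that
    agree on the initial state; `step` is the (cheap, trusted) one-step
    transition function the referee can run itself.
    Returns "a" or "b": the party whose trace matches the referee's result.
    """
    lo, hi = 0, len(trace_a) - 1       # invariant: traces agree at lo, differ at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid                   # still agree up to mid
        else:
            hi = mid                   # divergence is no later than mid
    expected = step(trace_a[lo])       # re-execute the single disputed step
    return "a" if trace_a[hi] == expected else "b"
```

The point of bisection is cost: the referee never replays the whole computation, only O(log n) comparisons plus one step of re-execution, which is what makes it feasible to arbitrate long indexing runs.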
G: That's one of the reasons that our subgraphs are compiled to WASM, actually: WASM is amenable to these fraud proofs. The next stage, the highest stage, which doesn't even require one-of-n and is just fully trustless, is validity proofs. With validity proofs you're able to somehow show the consumer that the computation has been done correctly.

So to get everything to the one-of-n model, we need at least fraud proofs, which is what we target in the short term for indexing, and then we can have anything at fraud proofs or higher. Rather than do fraud proofs as an intermediate step for queries, we'd like to just bring validity proofs to production to support queries, and here's the update on that.
G: The Roadrunner SNARK, which is the SNARK that the Snark Force has been developing for Scalar, is nearing its design freeze. We'll be pulling in Ariel to help us do things like compare the verifier gas costs on L1 and L2, and once we confirm that what we have made is actually going to be efficient, then we'll be able to freeze the design on Roadrunner. The Roadrunner SNARK was built for Scalar, but it turns out to be applicable to verifiable queries as well. So our next step there, once we finalize the design of Roadrunner for Scalar, is to split the design team from the implementation team, so the implementation team can get Scalar over the line and into production while the design team moves on to adapting Roadrunner to verifiable queries. So that's kind of what we're all doing to help move EIP-4444 forward and to make sure that The Graph is a necessary part of the dapp developer experience.
A: Amazing, thanks for sharing, Zach; very nice. I think that's a good segue into Semiotic. Sam has been working with us since wave one. Maybe we can share exactly what the team has been working on, and just do a quick intro, particularly on what you just called the Snark Force, which is actually a name that I quite like. Sam, would you be up to do a quick update here?
H: Sure, Pedro. So I'm going to give a little background on Semiotic, because we've been working with Zach and the rest of the Snark Force team to design the new Roadrunner SNARK, so I'm just gonna give you a little bit of back history about us.

We were founded two years ago and we have four co-founders; three of us have PhDs and the fourth one was working in R&D as well. We were doing industrial R&D specifically. So when you hear R&D and you hear PhDs, you might think academia, and academics are publication driven: that's how their careers progress, through getting papers. Industrial R&D is more focused on delivering, maybe proofs of concept, or pushing things into production. So specifically, where do we come from? Half of us came from an AI background and the other half came from a cryptography background.
H: I actually have experience and education in both. Gokay Saldamli is one of our co-founders; he's a cryptographer, and he's also a professor at San Jose State University. Prior to San Jose State and Semiotic, he was at Samsung, where he designed their core IP for the cryptographic hardware accelerators in their SIM cards. He wrote the Verilog (a hardware description language) for their cryptography accelerators, and that Verilog was deployed in billions of devices. So he's this hardcore mathematician-implementer guy, and I just wanted to give you a feel for the sort of team that we're trying to build, or that we are building.

So, just to complete the story: for the first year of the company we focused on fully homomorphic encryption.
H: Specifically, what we did was build a tool that was to be deployed in the cloud for encrypted machine learning as a service, using fully homomorphic encryption. So what is homomorphic encryption? At a high level, it lets you compute directly on encrypted data. For example, let's say a is some number and I encrypt it, and b is some number and I encrypt it. I can send you a and b, and you can add them and you can multiply them, but you can never know what a is or b is, or any intermediate result. And through addition and multiplication you can actually do a lot of sophisticated things; for example, you can evaluate deep neural networks for me. So you can do all the work for me, I can outsource it to you, and you never see any intermediate results.
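The add-two-encrypted-numbers example Sam gives can be made concrete with a toy Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts decrypts to the sum of the plaintexts. This covers only the additive half of the story (fully homomorphic schemes also support multiplication on ciphertexts), and the tiny primes make it purely illustrative and insecure:

```python
import math
import random

def keygen(p=1789, q=1861):
    """Toy Paillier keypair from two small primes (insecure; demo only)."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    n2 = n * n
    g = n + 1
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add_encrypted(pub, c1, c2):
    """Homomorphic addition: multiplying ciphertexts adds the plaintexts."""
    n, _ = pub
    return (c1 * c2) % (n * n)
```

The party doing `add_encrypted` never learns a, b, or a + b; only the holder of the private key can decrypt the result.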
H: You can send it back to me. So we did that for a year. We got attention and money from the government, from the military, but not a lot of traction from industry, because industry is mainly driven by regulation, and regulations don't currently cover homomorphic encryption.

Then, after about a year, we started working with The Graph. We were part of the wave one Graph grants, and we focused on reinforcement learning initially. After that we started expanding into cryptography, really with Zach being the champion for that. So now we have three people working full time with the Snark Force, and this team, like Zach said: The Graph and Edge & Node have been putting a lot of effort for several years into planning for where we are now. What we bring is a solid background in understanding cryptography and cryptographic primitives. The field of SNARKs, of zero-knowledge proofs, is exploding right now in terms of academic ideas and progress, and the Snark Force is at the bleeding edge of integrating what is just becoming theoretically possible and turning it into a practical system. It is at the leading edge of SNARK designs. So I think I'll stop there. We're super happy and excited to be working on this, and it's gonna be a big deal when it's deployed.
A: Great, thanks Sam. One quick mention of Justin Thaler as well, who's been working with you guys, as you know: Justin Thaler is on the side working also on zk-SNARKs with the team, so it's a pretty good task force to tackle this, for sure. Great. Going back to Alex: you mentioned something interesting about Solana and an issue you've been trying to tackle on indexing and syncing, the complexities around it, and how you can bootstrap the indexer community in a secure way. Do you want to share some notes?
D
I think it was sort of a follow-up on Yaniv's comment. We had a quick chat, because Solana, since we're doing a Firehose integration of Solana, is going to produce a lot of files. There's a lot of data there, we're talking gigabytes per day, sometimes maybe more. So there's a lot of data, and maybe people don't want to go and produce that data from genesis, because that's relatively hard, and they'll want to share the already-produced data.
D
This is similar to the notion of warp sync, so that either subgraph or Firehose backing history could be shared between indexers and sold, and have an economy there, so that people don't need to replay things from genesis, or replay subgraphs from the beginning, incurring the time that it takes, but can just preload them into their databases. So I just had a few ideas about how that could happen.
D
We also had discussions earlier about sharing the pre-processed subgraphs themselves. Imagine the data dumps from Postgres: what if people could share those, speed up their syncing, and then continue on from there? But how do you verify the integrity of that data? How would you make sure it's respectable data, validated in some way? I had some ideas, but that's the problem we see arriving, or the opportunity, I would say, for a new type of economy. Yeah.
I
You know, I just want to add that I do think it's a really great thing about our indexer community that most of the indexers, it seems, are running their own archive nodes and validating from genesis. I think that is really important, especially in a world where we expect fewer clients to actually be validating themselves. The role of the indexers in The Graph Network might actually be that of the clients in a blockchain network that actually do the validation.
I
How do we encourage as many people as possible to validate the chain, to add as much security as possible to the different networks, while still allowing a path for folks that need to optimize for speed and maybe weren't going to do it anyway? And then, what are the security guarantees going to be? How much security is there on these snapshots that are getting warp-synced?
I
Yeah, just another piece of context: I would separate warp syncing or snapshots of the underlying blockchain, like full node data, from snapshots of the subgraph data, and I think both are valuable.
I
If folks are trying to sync Solana and we can help them quickly get their Solana nodes up and running, I think that's valuable, and the indexers are probably in the best position to help each other get up and running. But that is also, I think, separate from the subgraph data. So there are DB snapshots of subgraphs, so you can get straight to syncing at a specific block.
I also think, from an incentive standpoint, some parts of the problem are incentives for providing the snapshots in the first place, and then incentives for sharing the snapshots with others. For example, we have query fees in the network, of course, which is end users paying for queries. Would there be an equivalent here, for example payments per byte or per something, if you just want to download a giant multi-terabyte snapshot from somebody?
K
There's also the kind of third bucket of data, which is the Firehose output, the historical flat files. So yeah: the underlying node data, the Firehose flat files, and then the subgraph data, and each of them makes sense to offer as some kind of sync solution or bootstrapping solution for new indexers.
I
Would the Firehose data be easily derivable from the full node data?
J
In some ways, yeah. In some ways the Firehose data feels like a superior option for nodes to load data from. I think we talked about this in Lisbon a little bit, but even if you look at what EIP-4444 is planning to do: they're planning to shim together multiple different versions of the Ethereum client into kind of a virtual client that can actually sync from genesis, and that becomes a lot easier to do safely if those things are just loading from Firehose files.
J
Right, you could basically have one version of the node syncing and outputting Firehose files, and then the next one loads its state right at the point where the consensus logic changed.
I
Yeah. Alex, were you saying that one of the goals for Firehose was that you could basically hydrate a full node from the Firehose data, because it would be a superset?
D
So, yes, that's a design goal. Right now some implementations, for example NEAR, do not have the full data to be able to do that, and maybe I'm wrong here, but that's the goal: so that you could even cross-hydrate. You extract data from an OpenEthereum node and you hydrate Geth, because the data is not different.
K
Sorry, Yaniv. I just wanted to plus-one Brandon's comment about distributing Firehose flat files as the kind of authoritative source for that data. I think it's going to be way easier to come to consensus on effectively the correct data that way: hashing the flat files is a lot simpler and much easier to verify than recreating it.
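The flat-file-hashing idea above can be sketched as a simple manifest of file digests that indexers compare before trusting a downloaded snapshot. This is a minimal illustration, not the actual Firehose tooling; the `.dbin` extension and manifest shape here are assumptions.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a flat file through SHA-256 so multi-gigabyte files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(directory: Path) -> dict[str, str]:
    """Map each flat file's name to its digest; peers can compare manifests
    (or a hash of the manifest) to agree on the correct data set."""
    return {p.name: file_digest(p) for p in sorted(directory.glob("*.dbin"))}

def verify(local_dir: Path, trusted: dict[str, str]) -> list[str]:
    """Return names of downloaded files whose digest does not match the
    trusted manifest; an empty list means the snapshot checks out."""
    return [name for name, digest in trusted.items()
            if file_digest(local_dir / name) != digest]
```

The point from the discussion holds here: agreeing on a handful of digests is far cheaper than re-deriving terabytes of history from genesis.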
I
So it seems to me like the rough shape of the solution here would be some kind of staking contract, where you would register a snapshot, provide the hash, and then some kind of deposit, an amount that you're staking on that snapshot.
K
You know, there are versions of the BitTorrent protocol that can run entirely... yeah, I mean, we could effectively distribute data that way, and in some sense you also get a scaling of capacity, and a kind of social consensus, based on the number of seeds for a given hash.
J
And then the second part is: how do you prove the correctness of the files generated by the Firehose-instrumented node? That looks a lot closer to verifiable indexing, where you want some kind of deterministic process. You could potentially do a bisection protocol to figure out where two nodes diverge, and then prove that one execution path was correct and the other was incorrect, and there might be stake attached to that.
I
Yeah, I was just saying I wouldn't expect that we would be able to run blockchain nodes in WASM to be able to do a full bisection protocol. But syncing nodes, kind of by construction of blockchain nodes, should be deterministic, and you should be able to get to consensus. You might not be able to prove on-chain where the syncing went wrong, but it feels like some kind of voting process could be used.
D
It could be verified, and you would publish from time to time, but have that granularity, and maybe differently when you're in the more real-time range, because there's that issue also if you want speed: there's a trade-off around allocations. When you close an allocation, let's say it's 10 hours late, and then you have all that time in which things could be happening.
J
I was wondering about the staking part, though: if, on top of creating the data set and then providing the data set, meaning bandwidth for downloading, you need on top of that to provide more financial means to deliver it basically as a service, is that the right approach?
J
And could we not have something where the payment for getting the data from the provider is locked for a certain time and can be returned, or something like that, and provides the protection in terms of data correctness in that sense, rather than having to stake on top of the already potentially significant resources that you're putting in?
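That locked-and-refundable payment could be modeled like this. A hedged sketch only, not a proposed contract interface: the dispute window and the refund/withdraw rules here are assumptions about how such a mechanism might work.

```python
from dataclasses import dataclass

@dataclass
class EscrowedDownload:
    """Sketch of the refundable-payment idea: the buyer's payment is locked
    for a dispute window; if the data is proven bad before the window
    closes, the buyer is refunded, otherwise the provider can withdraw."""
    amount: int
    locked_until: float          # end of the dispute window (unix time)
    disputed: bool = False

    def refund(self, now: float) -> int:
        """Buyer recovers the payment, but only during an open dispute."""
        if now < self.locked_until and self.disputed:
            return self.amount
        raise ValueError("refund only during an open dispute window")

    def withdraw(self, now: float) -> int:
        """Provider collects once the window closed without a dispute."""
        if now >= self.locked_until and not self.disputed:
            return self.amount
        raise ValueError("withdraw only after an undisputed window")
```

The design point being raised: the provider's revenue is at risk only if the data is bad, so no extra stake is needed beyond the resources already spent producing and serving it.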
I
It's an interesting point. So, is this something that we feel like... it doesn't seem like it would be a blocker, let's say, for Solana support, but I guess we've opened up the conversation now, to say that this is probably something we should tackle at some point. Maybe we find an owner to run a little bit further with it, thinking through a proposed design. And do we want to reflect on what kind of time frames would make sense to try to get something like this?
D
I'd participate in some group. I like all the verifiability things; I wouldn't lead, but I'd like to participate and work on that. As for the time frame, I think what we're going to see is that people are going to have handshake deals to share these files: it's going to have cost you, I don't know, $10,000 to process, so maybe you sell it for $2,000 because you hope to sell it five times. And then we can perhaps gather some of that
D
indexer community insight and see how we transform that into an on-chain business, and then secure it in some ways. But I think writing contracts for that, designing the thing, is going to take a little bit of time. So if we want it done at some point, and we want the economy to participate in The Graph economy, with the token, then it's better sooner than later to at least think about it.
A
Yeah, no worries. We just need to find an owner then, and we have a task force. Cool. Okay, we're right at the top of the hour; I think we should wrap up here. Nice discussion, pretty cool. Thanks for joining, all. Happy Christmas, happy New Year, I'll see you in a month. Do be on the lookout in the forum, we'll be sharing some new updates. As mentioned, we will be revamping the Foundation's Notion, so we have all of these working groups sharing updates in there.