From YouTube: The Graph's Town Hall #3, May 4th, 2021
Description
The Graph community's third Graph Protocol town hall.
This video was recorded Tuesday, May 4, 2021 @ 8am PST.
A
So, thank you very much everyone for joining us for the third protocol town hall. These have been going on about monthly and we've seen such great responses, so thank you again for joining. Let me just share my screen. Going over today's agenda: we'll do a few Foundation updates, then Brandon and Ariel will take over protocol updates, and we'll leave some time open for a Q&A. So, April was a fairly busy month for us.
A
We've got quite a few initiatives going on. Firstly, on the multi-blockchain front, we added support for Celo, Avalanche and Moonbeam. This brings our total support to 19 chains, most of them EVM-based, so very exciting. If you're building on anything EVM or Ethereum, The Graph likely supports it already. In terms of migration, we started phase one. Last week you might have seen a large announcement with 10 subgraphs that are being migrated from the hosted service: Audius, DODO, Enzyme, Gnosis, Livepeer, mStable, Opyn, PoolTogether, Reflexer and UMA are the first set of migrators. We're very excited to be working with these teams, so please reach out to us, or take a look at the forum link that was in our blog post, and let us know if you want support through your migration phase as well. We also posted an indexer migration guide in the Discord and in the forum. So if you're an indexer, maybe on testnet or on mainnet, and are looking to get involved in the migration and start testing Scalar, please take a look at that guide. And if you're a new indexer, we have quite a few resources ready for anyone who's just ramping up on testnet, depending on the kind of environment you're building in, so feel free to reach out to us or message another indexer on the Discord to get started.
A
Third,
I
wanted
to
cover
educational
modules,
so
rheem
ecosystem
manager
of
the
foundation
and
myself
have
been
working
hard,
the
last
few
months
to
develop
these
courses
for
academic.
You
know
programs
and
students
that
want
to
learn
more
about
the
graph.
So
we're
excited
to
be
initiating
two
programs
with
york
university
in
canada
in
the
hague
and
netherlands
and
lastly,
just
want
to
cover
grantees.
A
So
wave
two
is
open.
You
might
have
seen
the
application
feel
free
to
submit
at
any
point
and
we'll
make
sure
to
reach
out
to
you
if
the
project
is
aligned-
and
we
wanted
to
just
highlight
a
few
more
grantees
this
time
around
so
excited
to
introduce
rachel
black
of
good
ghosting
will
just
give
us
an
overview
of
her
grant.
B
Hi, yeah, thank you. Thank you for inviting me on. We're massive fans of everything that The Graph is doing, and it's so fantastic just to see it rolling out to different networks. We're about to launch on Matic, and we're also exploring Celo, and boom boom boom, there you are. But what is GoodGhosting? What are we doing? We're basically building an application and protocol to incentivize saving.
B
We want to stop saving being a chore and make it something much more fun. So we're in the process of finalizing our MVP, which will go live on Matic (Polygon). Basically, it's going to be a savings pool that will run for a fixed amount of time, say a month, and then every week in that month you have to contribute a fixed amount, like fifty dollars' worth. We contribute it on to Aave, the lending protocol, which generates lots of lovely yield, and then we can work out at the end of the pool: were you a good saver? Did you hit all of your targets? Fantastic. If you didn't, too bad, but you'll still get your deposit back, the principal that you've put in; the interest, though, is going to be split amongst those who were regular savers. So it's a really nice way to get higher returns than you would do anyway.
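The pool mechanic just described can be sketched in a few lines. The player names, pool length and yield figure below are made up for illustration; this is not GoodGhosting's actual contract logic.

```python
# Toy model of the savings pool: everyone deposits a fixed amount each week;
# players who never miss a week ("winners") split the pool's interest, while
# everyone, including "ghosts" who missed weeks, keeps their own principal.

WEEKLY_DEPOSIT = 50  # dollars' worth per week
WEEKS = 4            # a one-month pool

# True/False per week: did the player make that week's deposit?
players = {
    "alice": [True, True, True, True],    # regular saver
    "bob":   [True, True, True, True],    # regular saver
    "carol": [True, True, False, False],  # missed two weeks
}

total_yield = 30  # interest earned by the whole pool on the lending protocol

def payouts(players, total_yield):
    winners = [p for p, weeks in players.items() if all(weeks)]
    result = {}
    for p, weeks in players.items():
        principal = WEEKLY_DEPOSIT * sum(weeks)  # what this player put in
        bonus = total_yield / len(winners) if p in winners else 0
        result[p] = principal + bonus
    return result

print(payouts(players, total_yield))
# alice and bob each receive 215.0 (200 principal + 15 interest);
# carol gets her 100 of principal back
```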
B
We're also going to be adding a bit of sponsorship in there, so it should be quite a nice juicy return; depending on how everyone else does, you can get a higher return as well. And yeah, we're just super excited to be building with The Graph, and there's lots of stuff happening as well. We're also doing a community call on Thursdays, so if you're curious, jump in on that. It's at 4pm CET; I'll post a link to our Discord as well in case anyone's interested.
A
Thank you, Rachel. And just as more context for everyone: GoodGhosting is building subgraphs and using them for their dapp, but they're also just revolutionizing the way we think about DeFi, and they're gamifying it to allow more, you know, younger or less technical users to enter. So the Graph Foundation is here to support all kinds of web3 projects that are improving quality of life for our society.
A
Next
up,
riemann
martin,
will
share
a
recording
from
one
of
our
other
grantees
aditya
yeah
thanks
eva
hi,
everyone
for
those
who
don't
know
me,
my
name
is
rheem
and,
as
she
mentioned,
I'm
the
ecosystem
manager.
Here
at
the
graf
foundation,
it's
been
really
fun
and
exciting.
Getting
to
know
all
of
you,
especially
through
the
amazing
work
that
you've
done.
One
specifically
that
we
wanted
to
highlight
was
aditya's
work
on
the
graph
network
visualization
tool.
A
This
dap
will
really
help
people
appreciate
what
a
truly
decentralized
network
will
look
like.
It
will
not
only
visualize
the
graph
network,
but
it
will
also
show
data
in
an
easy
to
understand
and
interactive
way,
but
also
showcase
act,
network
activity
and
visualize
the
links
between
the
users
and
educate
new
user
network
participants.
A
Now, Aditya would have loved to join us, and I believe he might actually be in this call, but he does not have the strongest of internet connections, so he shared a little video with us to watch. So just bear with me while I share that, and hopefully somebody will give me a thumbs up that you can see it. Amazing.
C
So hi guys, I am King Super from Discord. I have been an early supporter of Graph Protocol; I have been a testnet curator and also one of the wave one grantees. So today I want to give a demo of what I've built recently.
C
I
have
created
a
visualization
for
graph
network,
so
we
all
know
what
a
centralized
service
looks
like
right.
It's
a
basically
it's
a
central
server
controlling
all
the
stuff.
Have
you
guys
ever
wondered
what
a
decentralized
network
might
look
like
and
what
could
be
a
better
example
of
a
decentralized
network
than
our
own
graph
network
right?
So,
let's
just
jump
into
it.
C
So
all
these
blue
nodes
are
indexers
and
all
these
yellow
ones
are
delegators,
as
you
can
see
here,
and
the
green
ones
are
the
curators
which
are
actively
signaling
on
the
surface
and
the
oval
shape
red
ones
are
the
sub
graph
okay.
So
what
I
did
is
basically
combined
all
the
roles,
all
the
four
roles:
indexer
delegates,
curators
sub
graph
by
edges.
So
basically,
what's
what
is
the
significance
of
an
edge?
C
So here is our PoolTogether subgraph. If there is an edge between a subgraph and an indexer, that would mean that that indexer is indexing that subgraph. Similarly, if there is an edge between a curator (this green node is a curator) and a subgraph, that would mean that that curator is actively signaling on that subgraph.
C
So
you
may
be
thinking
what
is
the
motivation
behind
this
kind
of
visualization?
So
I
think
this
kind
of
visualization
really
help
us
appreciate
the
beauty
of
decentralization
right.
You
cannot,
I
think,
yanis
recently
tutored
about
this,
that
you
cannot
just
launch
a
token
and
say
our
network
is
decentralized.
It
can't
be.
C
It
can't
be
done
like
this
right.
This
is
how
our
true
decentralized
network
looks
like
if
you
hover
over
any
node.
Let's
say
I
go
to
this
indexer,
so
it
will
show
me
all
the
major
information
related
to
that
indexer.
Basically,
its
role.
What
is
what
is
the
role
of
that
node
is
the
index
right
is
address,
is
reward,
query
card
indexes
check
token
delegated
token.
Is
it
over
delegated
or
not
all
kind
of
stuff?
Similarly,
if
you
go
over
any
delegator,
so
it
will
show
some
of
the
information
about
delegator
as
well.
C
Similarly
true
for
sub
graphs
and
curators,
so
in
a
nutshell,
a
visualization
like
this
acts
as
a
summary
for
our
graph
network.
You
can
you
can
say
so
it
it
helps
you
identifying
the
indexer,
which
has
the
largest
number
of
the
delegators.
Let's
say
you
can
find
just
by
looking
at
this
visualization,
what
all
indexer
are
the
most
trustworthy,
because
you
can
judge
by
the
number
of
delegators
that
are
that
have
in
delegated
to
their
indexer,
or
you
can
find
out
what
all
sub
graphs
are
being
actively
indexed
by
indexers.
C
Moreover, it's a dynamic visualization. What I mean by this is: if you hover over any node, it will show all the related information, and for all the other nodes which are connected to that node, it will highlight those edges. You can zoom in to view a specific section of the network, or you can zoom out, or you can just play with it: you can drag and drop, you can drag a node around in the network.
A
Awesome, thanks Aditya, if you're on this call; it was really great to see. And just a small shameless plug: tell all your friends wave two applications are open, and we'd love to see what kind of ideas, innovations and community initiatives all of you have to bring forth to the ecosystem. Thanks, and I'll pass it back to Eva. Awesome, thank you. And I just shared the link to the visualization.
A
If
you
guys
want
to
check
it
out
and
if
you
have
any
feedback,
feel
free
to
reach
out
to
aditya,
and
so
next
up
we'll
have
brandon
arielle,
giving
up
protocol
updates.
D
I actually believe we've fixed that issue. We had to restart the seed node that we're hosting for The Graph specifically. But in the meantime, Radicle also released a breaking change, which is 0.2, and a lot of you guys have already upgraded to that version. Unfortunately, that's a change that needs to be upgraded across the entire ecosystem in lockstep: all the seeds and all the clients need to be upgraded to the 0.2.x set of releases. So some people in our ecosystem right now are on the 0.1.x releases and some are on the 0.2.x.
D
Hopefully
we'll
be
upgrading
across
the
board
soon,
but
for
now
there
just
might
be
some
some
of
you
that
still
have
broken
clients,
at
least
with
respect
to
the
the
gip
repo
okay.
So
the
next
bit
of
housekeeping
is
on
some
gips
that
we've
already
talked
about
in
the
past.
D
The
first
one
is
gip
three,
so
we
did
a
count.
This
one
has
already
been
voted
on
by
the
community
in
snapshot,
but
we
hadn't
yet
done
a
protocol
upgrade
for
this.
So
the
council
voted
on
this
in
graph
governance
proposal.
Two
that
passed
actually
just
this
morning
was
the
the
end
of
that
vote,
and
so
we've
initiated
the
first
transaction
for
that
upgrade.
D
These
are
two-part
upgrades,
and
so
you
should
expect
to
see
that
this
week,
just
as
a
reminder
to
folks,
that's
just
fixing
a
a
a
minor
bug
for
certain
edge
cases
in
the
rewards
contract.
So
it's
not
something
that
should
generally
impact
you
shouldn't
need
to
do
any
kind
of
upgrades
on
your
software
or
on
any
of
your
index
or
strategies,
or
anything
like
that.
D
The
next
one
that
we
have
this
in
the
pipes.
This
is
to
help
support
the
scalar
upgrade.
This
has
been
audited
at
this
point.
This
is
a
withdrawal
helper
upgrade
and
I'll.
Let
actually
arielle
give
him
a
little
bit
more
color
on
this.
E
Yeah, hello. Let me share my screen; I'm going to show you our repo with the latest improvements in the protocol. The one that Brandon is mentioning is this one, the withdrawal helper. This is a contract we need to connect the funds held in the different channels to the staking contract.
E
Okay, so this contract is already audited, and what we will do is deploy it to mainnet. This will require a governance vote to set this contract as a source of funds for the staking contract, so we can use it to send funds to the protocol. So this is one of the upcoming governance votes that we will propose.
E
Then there are a couple of other improvements that are related to bug fixes of edge cases. This one I think I talked about in the last town hall; it's audited, and it's fixing a condition where, if you use this function called stakeTo, you might skip setting the delegation parameters for your indexer. So we fixed that edge case. Again, if you visit this PR, you'll see the complete description of the case, of how it works, and the solution of what is going to be proposed. And the other one is a condition we detected when closing allocations with a POI of zero.
E
The update of the snapshot that we need to calculate rewards was not properly called. Indexers sending POIs with a zero proof is not very usual, but it might happen, and in those cases the rewards calculation is not properly done for the time it takes to send the next update. So that's another edge case we fixed; it's already audited too. And the last improvement is this PR, which is basically creating a cache of contract addresses in each of the contracts.
E
So we spend less gas in transactions. Currently, the protocol works by having a registry of all the addresses of the different contracts, and going to look up each of these addresses is quite expensive: it requires a CALL opcode. This PR introduces a local cache, so we are just SLOADing the variable, and that way we save some gas. So this is another improvement that we are going to propose; it's already audited too. And the last one that I want to mention is more of a feature.
E
It's disputes. The dispute mechanism currently works by slashing indexers, and we currently have just one variable to configure the slashing percentage. So let's say it's set to 10%: whenever you present a bad proof, you go through the dispute process, and the protocol will slash you by ten percent.
E
The issue with that is that the same percentage is used both for proofs of indexing and for query attestations. An indexer will be responding to queries much more often than presenting proofs of indexing, so setting them to the same percentage is probably too high for queries.
E
So we wanted to split these percentages into different values. Let's say a bad proof of indexing will be slashed by two percent, and a bad query attestation will be slashed by 0.5%. This PR allows setting different percentages. So that's most of it; this PR is also already audited. Everything I'm talking about now has been audited, and I'll share more information about the audits on the forum.
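As a toy illustration of the change: today one percentage covers both dispute types, while the proposal keys the percentage on the dispute type. The stake size is made up, and the percentages are the example values from the talk, not final protocol parameters.

```python
# Today: one slashing percentage shared by both dispute types.
SINGLE_SLASHING_PCT = 0.10

# Proposed: a separate percentage per dispute type (illustrative values).
SLASHING_PCT = {
    "indexing_proof": 0.02,       # bad proof of indexing
    "query_attestation": 0.005,   # bad query attestation
}

def slashed_amount(stake, dispute_type):
    """Amount slashed from an indexer's stake for a lost dispute."""
    return stake * SLASHING_PCT[dispute_type]

stake = 100_000  # an indexer's self-stake, say
print(slashed_amount(stake, "indexing_proof"))     # 2000.0
print(slashed_amount(stake, "query_attestation"))  # 500.0
print(stake * SINGLE_SLASHING_PCT)                 # 10000.0 under today's rule
```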
D
Yeah, the meta update here is that we've gotten a lot better about booking auditors' time in advance. Starting in May, I think, we have one of our first retainers coming online, and then another one coming online in June, so we're hoping to have a much more steady stream of audits from this point forward, for future updates to the protocol. The separate slashing percentages are part of this bigger effort to make arbitration disputes more clear and consistent.
D
So
we
have
a
gip
that
we're
going
to
talk
about
in
a
little
bit,
but
there's
one
more
kind
of
dependency
that
feeds
into
that
and
that's
deterministic
wasm
based
gas
costing
and
leo's
gonna
from
edunote
is
going
to
be
talking
a
little
bit
about
that.
D
And
I
believe
that
zach
already
introduced
the
kind
of
the
work
in
progress
of
this
in
the
last
protocol
town
hall.
So
this
is
leo's
picked
up
on
that
work
and
has
kind
of
taken
that
over
the
finish
line
just
for
context,
you
can
check
out
the
recording
from
last.
F
Time,
hello,
let
me
share
my
screen.
F
Okay, so this is just a draft of the GIP that I'm working on; it's not on Radicle yet, so it might still change a lot. So, gas costing: the reason it's necessary is that it prevents, and lets you prove, that a handler for a trigger in the subgraph is too expensive or has run for too long. Right now we only have timeouts, which help, because they prevent the handler from just exhausting the indexer's resources on the machine it's running on, but a timeout is not deterministic.
F
So
that's
a
problem
for
protocol
security
because
we
need
to
be
able
to
prove
that
that
handler
is
too
expensive
so
and
that
the
the
subgraph
cannot
make
progress
past.
That
block,
so
basically
everything
that
a
handler
does
needs
to
have
a
gas
cost
and
therefore
needs
to
be
a
limit
in
this
gas
cost.
And
this
is
not
unlike
the
gas
costing
that
you
see
in
blockchain
protocols,
you
know
and
gas
limits
for
blocks
and
such
So, as a reference, we say that one unit of our gas is worth 0.1 nanoseconds of execution time. This does not need to be a strict correlation, but there needs to be some reference to execution time, so that it's actually measuring execution.
F
And then the limit is one hour, which equates to 36 trillion gas. This is larger than any indexer might actually want to run a handler for, and you can set a lower timeout.
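The quoted numbers check out: at 0.1 nanoseconds per gas unit, one hour of execution corresponds to 36 trillion gas.

```python
# One gas unit is referenced to 0.1 ns of execution time; the per-handler
# limit is one hour of execution.
NS_PER_GAS_UNIT = 0.1
ONE_HOUR_NS = 3600 * 1_000_000_000  # one hour in nanoseconds

gas_limit = ONE_HOUR_NS / NS_PER_GAS_UNIT
print(f"{gas_limit:.0f}")  # 36000000000000, i.e. 36 trillion gas
```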
F
But if the fisherman or the arbitrator needs to actually prove that the handler is too expensive, then we say that an hour is a reasonable bound for proving that the gas cost is over the limit. And then we get into the technicalities of how gas cost is measured. A handler does two things: it's either executing WASM instructions, or it is executing a host function that was called from WASM.
F
So we have this technique to instrument the WASM blocks, injecting callbacks to measure the gas costs of the instructions, and you can see in the implementation itself the cost for each instruction. And then there is a cost for each host export, such as store.get or store.set, ethereum.call, and all of the other utilities that we have, and they are costed proportionately to their input.
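A minimal sketch of the metering idea; this is not graph-node's actual implementation, and the costs and limit below are made-up numbers. The point is that every instruction and host call charges gas against a per-handler limit, and crossing it fails deterministically, at the same point for every honest indexer, rather than via a wall-clock timeout.

```python
GAS_LIMIT = 1_000        # tiny limit, for the example only

INSTRUCTION_COST = 1     # flat cost per interpreted instruction
HOST_CALL_BASE = 100     # base cost of a host function call

class OutOfGas(Exception):
    """Deterministic failure: depends only on the work done, not on the clock."""

class Meter:
    def __init__(self, limit):
        self.limit = limit
        self.used = 0

    def charge(self, amount):
        self.used += amount
        if self.used > self.limit:
            raise OutOfGas(f"used {self.used} > limit {self.limit}")

def run_handler(meter, instructions, host_calls):
    # A handler is modeled as N plain instructions plus some host calls
    # whose cost is proportional to their input size.
    for _ in range(instructions):
        meter.charge(INSTRUCTION_COST)
    for input_size in host_calls:
        meter.charge(HOST_CALL_BASE + input_size)

meter = Meter(GAS_LIMIT)
run_handler(meter, instructions=500, host_calls=[100, 200])  # exactly 1000 gas: ok
try:
    run_handler(meter, instructions=500, host_calls=[])      # pushes past the limit
except OutOfGas as e:
    print("handler rejected deterministically:", e)
```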
F
There are constants for the actual numbers for costing these host exports, or host functions. For example, some interesting limits: the gas cost for each eth call is pretty high, because we set a high gas limit for the Ethereum gas that the call can take, so this sets a maximum of about one thousand four hundred eth calls per handler, which should be enough for any reasonable subgraph. For store.set, the limit is 250,000 entities or one gigabyte of data; for store.get, 10 gigabytes of data or 10 million entities.
F
So
you
know,
there's,
like
all
the
math
operations
begins,
big
decimal.
So,
for
example,
one
one
that's
particularly
expensive,
there's
no
big
power,
which
is
like
the
exponentiation
function
and
that
doesn't
have
an
exponential,
complex,
computational
complexity.
So
this
you
know,
if
you
use
an
exponent,
that's
large,
you
could
actually
reasonably
go
over
the
gas
cost,
so
that
has
like
a
more
practical
implication
here.
D
Oh
yeah
thanks
and
like
yeah,
like
leo
said,
you
know
an
important
clarification
here
is
like
the
goal
right
now
isn't
for
it
to
be
super
super
precise.
The
main
goal
for
this
wave
of
this
is
to
kind
of
achieve
determinism.
D
Obviously,
you
know
leo
looked
carefully
at
like
the
time
complexity
of
these
things
and
tried
to
get
it.
You
know
right
on
the
rough
order
of
magnitude,
but
we
can
refine
these.
You
know
constants
over
time
and
we
can
also
do
more.
D
You
know
with
you
know
the
gas
cost
right
so
right
now
it's
just
going
to
be
a
protocol
based,
you
know,
sort
of
default
limit,
but
you
could
even
have
this
like
be
exposed
in
like
a
subgraph,
manifest,
for
example,
as
like
a
hint
to
indexers
that
hey
this,
you
know
this
subgraph
is
like
relatively
easy
to
index
compared
to
you
know
this
other
this
other
subgraph.
So
this
is
kind
of
foundational,
but
there's
a
lot
of,
I
think,
work
and
and
proposals
that
could
come
out
of
this.
D
That's
all
right:
more
teams
got
you
cool,
so
we
got
one
more
gip.
This
is
another
one.
That's
work
in
progress
should
be
published,
hopefully
today
or
tomorrow,
I'm
related
to
determinism.
This
is
another
one
from
me
and
I'll
walk
you
through
it
very
quickly.
D
This
is
not
a
like
protocol
upgrade,
so
this
is
a
process
that
the
graph
council
and
the
community
and
the
graph
core
developers
would
need
to
kind
of
follow
in
order
for
this
to
work,
and
the
goal
of
this
process
is
to
establish
what
is
the
canonical
behavior
of
the
subgraph
api
and
the
protocol
at
any
given
time
and
how
do
the
features
of
that
subgraph
api
interact
with
the
features
of
the
protocol,
so
there's
there's
two
kind
of
motivations
behind
this
one
of
them
I
already
kind
of
hinted
at,
which
is
that
you
know
we
want
to
be
able
to
support
things
like
arbitration
and
the
protocol
to
secure
and
provide
like
good
guarantees
for
the
integrity
of
query,
results
and
indexing,
and
that
requires
determinism.
D
So
that's
why
you
saw
you
know
leo
give
this
presentation
around
the
wasm
gas
costing,
but
we
also
need
to
have
everyone
in
the
network
sort
of
agree.
What
is
the
correct
version
of
the
behavior?
You
know
for
actually
implementing
the
the
subgraph
api,
so
that's
kind
of
the
first
goal
here.
D
The
second
goal
is
a
little
bit
more
subtle,
but,
as
you
all
know,
you
know
edunode
as
an
operator
of
a
centralized
indexer
today,
the
quote,
unquote
hosted
service
you
know
is,
is
adding
support,
for
you
know
change
at
a
very
rapid
rate.
The
graph
node
core
team
are
also
adding
new
features
at
a
very
rapid
rate
effectively.
You
know
the
graph
is
a
protocol.
That's
still
under
rapid
development
and
not
every
feature
right
out.
D
The
gates
is
going
to
be
compatible
with
the
full
range
of
protocol
features
in
the
decentralized
network,
and
so
part
of
what
this
process
is
also
trying
to
establish
is
you
can
kind
of
think
of
it
as
a
life
cycle
or
stages
for
features
to
be
added
to
the
decentralized
network
immediately,
but
then
have
gradual
support,
or
you
know,
granular
support
added
over
time,
and
what
that
lets
us
do
is
instead
of
having
to
send
traffic
through
the
hosted
service
or
to
some
other,
you
know,
centralized
indexer.
D
It allows us to divert all traffic to the decentralized network, and it provides clarity to consumers and indexers that are using those subgraphs in the decentralized network about which protocol features a subgraph is going to be interacting with. That might be a little bit abstract, so I'm actually just going to scroll down and jump to the example I included here. And actually, we're looking at the markdown, so...
D
So
this
is,
this
is
meant
to
be
illustrative.
This
isn't
the
this
isn't
meant
to
be
the
exact
matrix,
but
the
idea
is
that
on
this
left
column
here
we
have
features
of
the
subgraph
api
right.
So
we
have
things
like
full
text
search.
We
have
things
like
ethereum
mappings
that
can
call
out
to
ipfs.
D
We
have
things
you
know
like
some
of
the
multi-blockchain
ones
that
are
being
supported,
ethereum,
test
nets
and
so
on,
and
not
all
of
these
are
going
to
have
the
same
levels
of
determinism
right,
so
full
text
search
for
example,
right
today,
is
at
the
stage
of
development
that
it's
at
and
and
research.
It
is
deterministic
with
respect
to
indexing,
but
it's
it
is
not
deterministic
with
respect
to
querying,
and
so
I
think
this.
D
This
is
something
that
we
saw
created
a
lot
of
confusion
a
week
or
two
ago,
when
we
did
the
first
round
of
migrations
to
the
decentralized
network.
The
omen
subgraph
got
published
and
the
omen
subgraph
has
full
text
search,
and
so
a
lot
of
indexers
were
kind
of
scrambling
like
well.
Do
we
index
this?
Do
we
not
index
this?
What
is
what
is
the
nature
of
interacting
with
the
subgraph?
What
does
that
mean
to
us
for
our
participation
in
the
network?
D
Consumers
would
have
the
same
questions,
so
the
goal
is
to
establish
a
matrix
like
this.
That
would
say.
Okay,
this,
you
know
subgraph
element.
It
uses
full
text
search.
Okay.
That
means
yes,
in
fact,
it
is
eligible
for
indexing
rewards.
Yes,
it
is
eligible
for
proof
of
indexing
disputes,
arbitration
and
slashing,
however,
because
the
queries
that
involve
full
text
search
are
not
deterministic,
yet
it
would
not
be
eligible
for
slashing
of
query
slashing
based
on
query
disputes
using
query,
attestations
and
the
matrix.
D
The
matrix
is
actually
more
complex
than
you
would
expect.
There
are
some
features
which
are
sort
of
trivially
deterministic
with
respect
to
indexing,
but
not
querying
some
that
are
deterministic
with
respect
to
querying,
not
not
indexing,
and
then
there's
some.
D
I
don't
have
it
in
my
list
here,
but
there's
also
some
features
that
are
just
simply
experimental
right
where
you
don't
want
to
commit
to
an
api
too
early,
because
developers
are
going
to
take
a
really
big
dependency
on
that,
and
so
you
might
still
want
to
have
the
freedom
to
make
breaking
changes.
As
you
know,
the
graph
node
core
team
and
contributors,
you
know
collect
feedback
on
that
api
and
that's
another
great
example
of
where
you
wouldn't
necessarily
want
to
enable.
D
So
let
me
get
into
concretely
just
what
this
could
look
like
in
the
network.
So,
let's
pop
back
over
to
the
gip,
so
at
a
higher
level,
this
is
kind
of
the
important
things
to
to
kind
of
keep
in
mind
for
the
process.
So
this
is
all
kind
of
elaborated
in
prose,
but
I'm
just
going
to
kind
of
walk
you
through
the
bullets.
D
So
the
main
thing
here
is
that,
given
that
the
graph
node
or
and
given
that
the
protocol
itself
is
still
just
you
know,
relatively
new,
as
far
as
protocols
go,
you
know
launched
back
in
december.
Still,
you
know,
features
being
added
all
the
time
under
super
rapid
development.
It's
pretty
hard
right
now
to
get
a
stable,
like
client,
independent,
technical
specification
of
the
complete
subgraph
api.
You
know
it's
it's
it's
sort
of
just
like
waterfall
versus
agile
trade-off.
You
know
that
you
see
a
lot
of
companies
in
the
traditional.
D
You
know
software
space
making
So, for right now, the proposal here is that graph-node be used as the reference implementation for the subgraph API. What that means is that, for a given version of graph-node, any behavior that it implements for the subgraph API is, by definition, the canonical behavior. That even includes something you might consider buggy, for the purposes of proofs of indexing, slashing, attestations and disputes.
D
The
other
proposal
here
is
that
graph
node
be
the
source
of
truth
for
feature
detection.
So
I
just
showed
this
you
know
matrix
over
here,
and
you
know,
I
think
the
first
thing
people
will
be
wondering
is
like
well.
D
How
do
I
figure
out
what
features
a
subgraph
is
using,
so
the
proposal
is
that,
starting
in
a
future
version
of
graph
node
graph
node
enforces
that
when
it
runs
a
subgraph
and
it
does
feature
detection
on
a
subgraph
that
the
feature
is
detected
while
running
the
sub
graph
matched
the
features
that
are
listed
in
the
subgraph
manifest
itself.
So
there's
going
to
be
like
one
either
a
features
key
for
probably
just
one
features,
key
of
features
that
are
included
in
that
in
that
subgraph.
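As a rough illustration, a manifest carrying an explicit features list might look like the sketch below. The shape and field names here are illustrative of the proposal being described, not a final spec.

```yaml
specVersion: 0.0.4
description: Example subgraph
# Features the subgraph uses. graph-node would verify at runtime that the
# features it detects match this list, and the Graph CLI can fill it in
# automatically at build time.
features:
  - fullTextSearch
  - ipfsOnEthereumContracts
schema:
  file: ./schema.graphql
dataSources: []
```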
D
Some
of
them
might
be
marked
as
experimental,
and
this
isn't
something
that
needs
to
impose
a
new
operational
overhead
on
dapp
developers
or
subgraph
developers,
because
this
is
logic
that
can
be
built
into
the
to
the
graph
cli.
So
this
already
happens
today
when
you
actually
like
the
subgraph,
manifest
that
most
people
have
in,
like
their
github
repos,
isn't
identical
to
what
actually
gets
published
to
the
network,
because
the
graph
cli
builds
that
sub
graph
and
publishes
the
subgraph
manifest
to
the
built
subgraph
manifest
to
ipfs.
D
So
the
graph
cli
can
do
the
exact
same
feature,
detection
as
that
the
graph
node
will
do,
and
it
can
augment
the
subgraph
manifest
with
the
the
correct
listed
features
and
what
that
what
that
does
is
it
gives
us?
You
know
a
source
of
truth
that
most
people
can
look
at
without
actually
having
to
run
a
graph
note
themselves
that
they
can
be
reasonably
sure
represents.
D
You
know
the
features
that
are
used
for
that
subgraph,
So that includes consumers, and it includes indexers that just want to search for subgraphs without actually loading each one into graph-node first to run feature detection. Having that accurate list of features gives you the names that you can then reference against this matrix and see, okay, which features are going to be supported in the network. The next stage of the process is basically to have the Graph Council be the source of truth for both the feature support matrix and the canonical graph-node version.
D
So the idea here is that the Graph Council today has already been voting on GGPs. If you guys haven't checked these out already, check out Snapshot: the Graph Council has been voting on Graph Governance Proposals, specifically for protocol upgrades.
D
You
know
the
graph
council
could
also
vote
on
a
proposal
that
doesn't
actually
result
in
any
on-chain
transactions,
but
simply
establishes
the
new
canonical
graph
node
version,
as
well
as
the
new
canonical
support
matrix
of
features
in
the
in
the
protocol,
and
this
becomes
really
important
for
the
arbitrator.
You
know
to
have
a
reference
for
which
we'll
get
into
in
the
next
gip,
but
that's
part
of
the
part
of
the
rationale
for
for
adding
clarity
and
defining
this
process.
Now.
D
The
next
step
of
the
process
is
a
recommendation
for
the
graph
node
core
developers,
which
is
diversion
graph
node
using
december
standards,
specifically
the
convention
that
between
major
versions
of
graph
node,
no
breaking
changes
should
be
added,
although
new
features
can
be
accreted,
and
so
this
means
that
between
major
versions.
So,
let's
say
hypothetically,
the
graph
council
votes
on
a
graph
governance
proposal
for
graph
node
version
1.0
and
they
say:
okay
1.0
is
the
canonical
version
of
the
you
know
of
the
subgraph
api.
D
Behavior-wise, indexers should then be free to upgrade to any 1.x version of graph-node without fear of being slashed due to inconsistent query attestations or inconsistent proofs of indexing.
D
This
is
actually
still
quite
a
flexible
strategy
because
the
as
many
of
you
know,
the
subgraph
manifest
itself
can
be
version
bumped,
so
there's
a
spec
on
the
subgraph
manifest,
and
so
that
also
gives
you
like
another
outlet
for
sort
of
adding
new
functionality
or
even
changing
functionality,
while
keeping
the
graph
node
backwards
compatible
for
existing
sub-graphs
in
the
network.
So
that's
kind
of
the
convention
there
and
we'll
get
into
this
as
well.
D
But
you
know
if,
for
some
reason
like
a
bug
is,
is
written
that
does
introduce
a
backwards,
incompatible
change.
You
know,
inadvertently.
D
D
D
But this is sort of the foundational part of that. And then the last part of this process is optional, but I think it's one that's probably practical for the short term, which is that the Graph Council can also ratify n-minus-one support windows for past graph-node versions. Very likely, any time the council votes on a new canonical version of graph-node and the subgraph API, it would not be immediately effective.
D
It would probably be effective as of some future block, or some future epoch in the protocol. Nonetheless, having a knife-edge rollout of graph-node, especially this early in the protocol, when some subgraphs take days or even weeks to sync, could be really disruptive to the protocol, and we're not trying to do anything that creates downtime for users or consumers of the protocol unnecessarily.
D
Especially at this stage, as we're underway with the subgraph migration, with new projects rolling out their subgraphs into production.
D
Eva, just a quick process check: am I good for the rest of the agenda on this call? Is there a time we need to leave at the end?
D
Both the current version and the past version of graph-node could be considered correct behavior, and so that gives indexers a little bit more flexibility in upgrading, while making sure that they can continue submitting proofs of indexing, collecting indexing rewards, and accepting queries and query fees. And so that's kind of the gist of this proposal.
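In code terms, the N-1 support window amounts to accepting work produced under either ratified version; a minimal sketch with hypothetical names:

```typescript
// Hypothetical sketch of the N-1 support window: during the window, work
// produced under either the current canonical graph-node version or the
// previous one is treated as correct behavior.

interface SupportWindow { current: string; previous: string | null; }

function isVersionAccepted(producedWith: string, window: SupportWindow): boolean {
  return producedWith === window.current || producedWith === window.previous;
}
```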
D
So the next one that we want to check out is the arbitration charter, and this has been published already in the forums. I know some of you were having trouble accessing it, but I've also posted it as a HackMD. I know Zorro, or I guess Oliver I should say, has provided some feedback on this, but yeah, I encourage the rest of you to check it out.
D
So the goal of the arbitration charter is to add clarity to the behavior of the arbitrator beyond that which is specified in the smart contract code itself. Right, so this is actually what we call a protocol charter.
D
This was described in GIP-0001, and the idea is that the Graph Council could ratify this protocol charter, and it is actually intended to bind the behavior of the arbitrator, meaning that if the arbitrator is found to not be in compliance with the arbitration charter, the Graph Council is making an implicit commitment to reassign or remove that arbitrator. So that's what the Graph Council ratifying this would represent.
D
Should they choose to do so. So just a quick recap for anyone on the call, and I think everyone's probably familiar with this at this point, but the arbitrator's role is to decide the outcomes of disputes. There are two main types of disputes, proof-of-indexing disputes and query attestation disputes, and generally we call the people that submit disputes fishermen, or fisherpeople.
D
I guess. And there's a number of different paths for detecting those errors and submitting those disputes, which we don't need to get too deep into today. So I'm just going to jump into the body of the charter. Each of these sections has kind of its own rationale that I'll try and walk through. I know it's a lot of text, but hopefully I can paint a good narrative here.
D
The second point is basically just what I described: the Council can remove the arbitrator if they're not complying with the body of the charter.
D
In the early days of the network, the arbitrator has the ability to settle a dispute as a draw, and the rest of this arbitration charter outlines a couple of ways in which that power could be used. The first one, and I already alluded to this in the last GIP, is around determinism bugs. So these have happened: the Edge & Node team has been running graph-node in production at this point for several years, and we've had our share of determinism bugs over the years. Sometimes they're hard to pin down.
D
I fully expect that, as the goal of the protocol is to continue adding value for users and adding features on a quick cadence, there is the possibility that determinism bugs might be introduced again in the future. So in the cases where the arbitrator can make a reasonable assessment that a proof of indexing or a query attestation was likely incorrect due to a software malfunction, a software error, some kind of determinism bug, they have discretion to settle a dispute as a draw. One thing, and I'm going to jump around a little bit here just because we're talking about these determinism bugs, but one thing related to that is the GIP that Ariel presented earlier about the separate slashing percentages for queries and indexing, like I said.
D
You know, slashed or disputed. So Ariel's GIP laid the foundation of establishing separate slashing percentages for these proofs of indexing and query attestations, to not make the burden of serving queries too high. Another thing that we do is we set a maximum allowable slashing for query disputes over a given allocation, and the maximum that's proposed in this arbitration charter is that an indexer can only be slashed for queries once per epoch per allocation.
D
So if you have an allocation that spans 28 epochs, you could in theory be slashed 28 times. And again, we want to parameterize that slashing percentage so that, even with the arbitrator exercising discretion, there's very little chance of an indexer getting wiped out or significantly harmed due to a determinism bug or something to that effect.
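The arithmetic behind that worst case is simple; here's a sketch with illustrative numbers (the actual slashing percentage is a protocol parameter, not the value used here):

```typescript
// Back-of-the-envelope sketch of the "once per epoch per allocation" cap
// on query slashing described above. The parameter values are illustrative.

function maxQuerySlash(
  stake: number,            // indexer's slashable stake
  slashPercent: number,     // per-dispute query slashing fraction, e.g. 0.01
  allocationEpochs: number  // number of epochs the allocation spans
): number {
  // At most one query slash per epoch over the life of the allocation,
  // approximated linearly against the original stake.
  return stake * slashPercent * allocationEpochs;
}
```

With a hypothetical 1% per-dispute percentage over a 28-epoch allocation, worst-case exposure is 28% of stake, which is why the percentage itself has to be set low.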
D
Another thing we've put in to kind of add fairness and protect indexers is double jeopardy protection. So right now the query attestation structure doesn't have any form of replay protection.
D
So the same query, the same query body and query result, will produce the same attestation structure every single time, which means the protocol can't yet distinguish between the indexer making the same error, you know, ten times, versus someone just submitting a single error ten times. I'll note that a number of things in this charter are things that we may want to write proposals to change in other parts of the protocol in the future.
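To make the replay problem concrete, here's a sketch (hypothetical shapes, not the real attestation format) of how identical query/response pairs collapse to one actionable dispute under a double-jeopardy rule:

```typescript
import { createHash } from "crypto";

// Hypothetical shape of a dispute candidate; because an attestation over
// the same query body and response is byte-identical, hashing the attested
// pair gives a stable deduplication key.
interface DisputeCandidate { queryBody: string; response: string; allocationId: string; }

function attestationKey(d: DisputeCandidate): string {
  return createHash("sha256")
    .update(d.allocationId)
    .update(d.queryBody)
    .update(d.response)
    .digest("hex");
}

// Keep only one dispute per identical attested (query, response) pair.
function dedupeDisputes(disputes: DisputeCandidate[]): DisputeCandidate[] {
  const seen = new Set<string>();
  return disputes.filter(d => {
    const key = attestationKey(d);
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```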
D
In the case of the attestation structure, it is split across almost every codebase in the system, which is, you know, substantial, and so the lower-risk option right now is to implement the double jeopardy rule. And then we can work on a proposal at our leisure, when we can schedule this sort of lockstep upgrade of the attestation structure across a lot of our codebases.
D
Great, yeah, so let me speed through this. The next one is the statute of limitations. I think that's pretty straightforward, just from the legal analog. The goal here is not to disadvantage indexers with respect to attackers, right? So attackers can unstake immediately after doing an attack, but honest indexers will stay online and keep working, and so it doesn't make any sense for indexers to be on the hook for errors that they committed.
D
You know, past a certain amount of time, when attackers won't be afforded that risk. Data availability just describes the fact that the arbitrator can't settle any dispute where the data to settle the dispute is unavailable. I think that's pretty clear. In general, the fisherpeople should have the incentive to make sure that data stays available, so that they can get their fisherman reward, but in the meantime the arbitrator will settle those as a draw.
D
We talked about this one, about setting the cap on the amount of slashing, and valid proofs of indexing for a given epoch. This basically relates to the GIP we just went through.
D
So that's kind of the dependency for this section, but it additionally specifies that when an indexer is submitting a proof of indexing, the correct proof of indexing is the one for the first block of the epoch in which the allocation is closed, with the caveat that, because it's unpredictable when a transaction is going to get mined, if an indexer submits a proof of indexing for the first block of the previous epoch, that would be settled as a draw; it would be forgiven, even though it's not technically correct.
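That rule can be summarized as a small decision function; the names are illustrative, not taken from the charter itself:

```typescript
// Hypothetical sketch of the POI validity rule described above: the
// canonical POI is taken at the first block of the epoch in which the
// allocation closes, but a POI for the first block of the *previous*
// epoch is forgiven (settled as a draw) because transaction inclusion
// time is unpredictable.

type PoiRuling = "valid" | "draw" | "slashable";

function judgePoiEpoch(poiEpoch: number, closeEpoch: number): PoiRuling {
  if (poiEpoch === closeEpoch) return "valid";
  if (poiEpoch === closeEpoch - 1) return "draw"; // forgiven
  return "slashable";
}
```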
D
The arbitrator is on the hook for settling disputes in a timely manner, and the rest of this GIP is basically the motivation and rationale that I kind of walked you through as we were walking through the sections. So we've already gotten some good feedback on this in the forums. This will probably go through some updates before it's considered in a more complete state. I encourage you to take a look at it.
D
The next step for this GIP would be for the Graph Council to discuss it and ratify it, using a GGP, once they feel that the community is at a reasonable level of consensus and understanding on the proposal. So I'll stop there. I know we've covered a lot, but we've got about five minutes or so for questions.
A
We've got a question here from Sam Green: what is the expected turnaround time for arbitration?
D
So what I believe the charter says is that they should attempt to settle it within a thawing period, the idea being that if it is an attacker, they would, you know, presumably unstake right after their attack, and we want to make sure that all disputes are settled within a thawing period.
D
So technically, slashing has always been allowed. I think the communication early on, maybe even before the protocol was launched, was that the arbitrator would exercise discretion in doing that, and it was never defined what discretion meant. This charter is meant to add a little bit more clarity to that. But slashing has been enabled in the protocol from day one, and I think the arbitrator could act if it were to see, you know, past malicious activity within the statute of limitations defined in this charter.
A
I have one question. So you talked a lot about some of the upcoming changes, and we also have migration going on. In your opinion, what can indexers do right now to get best prepared for what's coming?
D
Great, great question. Well, we have a migration workshop tomorrow, where we're going to be focusing the whole workshop on disputes and arbitration. So we'll be reviewing some of what we discussed in this town hall today with respect to the charters, but we'll actually be going a lot deeper into the tooling. So Ford has been working a lot on the indexer agent, and specifically a feature we call POI cross-checking.
D
So this is the way that honest indexers in the protocol can automatically detect bad POIs from other indexers, and it flags them for manual review. And Ariel has been working on a CLI tool for taking that data that's output by the manual review and submitting a dispute on-chain. So tomorrow we'll be going through that in depth, and yeah, I hope to see you all there.
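At its core, POI cross-checking is comparing your locally computed POI against what other indexers published; here's a simplified stand-in (hypothetical names, not the actual indexer-agent code) for that comparison:

```typescript
// Hypothetical sketch of POI cross-checking: compare the POI your own
// indexer computed for a (subgraph, block) pair against POIs published by
// other indexers, and flag any that diverge for manual review.

interface RemotePoi { indexer: string; poi: string; }

// Returns the addresses of indexers whose POI disagrees with ours.
function flagDivergentPois(localPoi: string, remote: RemotePoi[]): string[] {
  return remote.filter(r => r.poi !== localPoi).map(r => r.indexer);
}
```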
A
No, it looks like we're good. So thank you, everyone, for joining us again. Oh, we have one final question, semi-unrelated: does The Graph have WebSockets?
D
So I'm not sure what part of the system you're referring to. The Graph does, or a version of graph-node, supported subscriptions; I believe that's been deprecated and not used. I'm not sure what other parts of the system might use WebSockets; the state channels use NATS.
F
D
Yeah, so the subscriptions used WebSockets. They're generally not recommended for usage, and they're not yet supported in the decentralized protocol, so we encourage people to use polling for now as a strategy. And they're stateful, so if you were to try and use them in the decentralized network, it requires you to have an ongoing relationship with a single indexer, whereas we're kind of trying to encourage more of a many-to-many, real-time marketplace between consumers and indexers.
A
And we've got a follow-on question: for front-end web development, is there a way to add event-based subscriptions or querying?
D
Yeah, I'm not totally clear on the question, but, if I understand it correctly, you are able to define subgraphs where the entities correspond to events. We've done this for some of our analytics use cases internally, and you could poll for those events, so you could have a subgraph that you poll just to get real-time events.
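For example, a front end could poll an event-style subgraph by repeatedly asking for entities newer than the last one it saw; the entity and field names below are invented for illustration:

```typescript
// Hypothetical sketch of the polling pattern described above: a subgraph
// whose entities correspond to on-chain events can be polled for anything
// newer than the last entity seen. The entity name and fields are made up.

// Builds the GraphQL query for events after a given timestamp.
function buildEventQuery(sinceTimestamp: number): string {
  return `{
    transferEvents(
      where: { timestamp_gt: ${sinceTimestamp} }
      orderBy: timestamp
      orderDirection: asc
      first: 100
    ) { id timestamp from to amount }
  }`;
}
```

A client would run this on an interval, advancing `sinceTimestamp` to the newest timestamp it has seen after each response.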
Another question here is: will non-technical users be able to be fishermen, so, for proofs of indexing?
D
The goal is for users to be fishermen, and the way that would work is either using the gateway, which is what most end users will use today, or using a query engine running on their local machine. The query engine, based on the user's parameters, can periodically cross-check results against multiple indexers, and if it ever spots inconsistent results from two indexers for the exact same query, you know that at least one of those indexers is slashable, right, because those results should agree with one another.
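A minimal sketch of that cross-check (with a hypothetical response shape): fan the same query out to several indexers and report any disagreeing pair, since at least one member of each pair must be wrong.

```typescript
// Hypothetical sketch of result cross-checking: given responses from
// multiple indexers for the exact same query, report every pair whose
// bodies disagree; at least one indexer in each pair is slashable.

interface IndexerResponse { indexer: string; body: string; }

function findConflicts(responses: IndexerResponse[]): [string, string][] {
  const conflicts: [string, string][] = [];
  for (let i = 0; i < responses.length; i++) {
    for (let j = i + 1; j < responses.length; j++) {
      if (responses[i].body !== responses[j].body) {
        conflicts.push([responses[i].indexer, responses[j].indexer]);
      }
    }
  }
  return conflicts;
}
```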
D
And then there will be a UI for end users to submit disputes in that case, to slash indexers. And so, yeah, that's something that a non-technical user could do.