From YouTube: The Graph’s Town Hall #4 Jun 1st, 2021
Description
The Graph’s Community 4th Graph Protocol Townhall
This video was recorded: Tuesday, June 1 @ 8am PST, 2021.
A
Welcome everyone to the fourth Protocol Town Hall. We are so happy to see so many faces joining us every month. To kick things off today, I'll be giving an update on the Foundation. As you might know, the last few months have been focused on migration; we're well on our way into phase two and phase three, and you'll be hearing more public updates about that shortly.
A
We also wanted to announce a new initiative, which is a wave of RFPs. We did our wave one of grants last quarter, and we'll be doing a wave two of grants this quarter as well, but we also have a ton of RFPs we'd love to share with you. So I'll quickly share my screen. As many of you know, the Graph Foundation website, thegraph.foundation, is a Notion page that's quite interactive.
A
This is our interim page, but we will be launching an RFPs page today, where we have over 40 projects that we would love to see our community build, and I can go into each of these. To start, in the dapps category, we're looking for someone to build a simple UI to help our governance process and streamline the information flow between the Radicle GIPs and our forum. We're also looking for someone to create a curator rewards calculator, similar to the ones that exist for delegators and to other DeFi analytics sites that query subgraphs. And we really want to see an NFT UI or dapp that can make the NFT community much richer by sharing the data within the subgraphs.
A
We actually have about 20 subgraphs that we'd love to see built. These have been requested by a lot of top-tier projects, and they're looking for high-quality subgraphs that could take anywhere from 30 to 45 days to reach full completion, so head on over to see which ones are listed. If you happen to be an expert in one of those protocols, we'd love to work with you to get it finished.

A
Within community building, we have quite a few things around educating the community about our own initiatives, like taking notes for the Protocol Town Hall or weekly educational memes about The Graph; that might mean content about specific users, or about specific experiences like indexing or curating on the network. We'd also love to see a Web3 newsletter, what we're calling "Bits and Bytes", similar to Robinhood Snacks or other communities' newsletters. We want to start creating more bite-sized content around Web3 and what's being developed.
A
We also have these two community-building RFPs that we're excited about. One is an economic report, similar to past reports made on Ethereum 2.0, assessing the economics and certain trade-offs for users, and helping indexers and curators be informed about the economics of participating in the protocol. We also have one for creating documentation for Hardhat: currently our documentation supports Truffle, but we've gotten a lot of requests for Hardhat docs.

A
Within indexer tooling, this is the largest category, so we would love indexers to submit a proposal or help other teams build these out. We've got quite a few tools that can help indexers better assess their own infrastructure needs, work out how to meet certain query traffic, and basically be ready for the mainnet migration, and you can take a look into each of these. One of them is a query traffic simulator, to help indexers understand how to scale their infrastructure on testnet; another is load testing Ethereum.
A
Lastly, we've got a few RFPs here for the indexer community. We'd love to create more high-quality content that can take new indexers or node operators from 0 to 100. One of them is an indexer all-in-one video tutorial series, which would go step by step through how to deploy a Graph Node.
A
Another one that I'm really excited about is Postgres consulting and workshops. We got some advice that a lot of indexers would really gain from having more Postgres expertise, so we're looking for Postgres experts in our community to come help indexers, provide mentorship, and develop guides so they can optimize their infrastructure.
A
So please keep in touch. We'll be sharing these on Discord, Twitter, and in the forum later today, and we hope to see your proposals.
B
Thanks, Eva. Hi everyone, it's really nice to see a lot of familiar faces again, but for those who don't know me and are joining us for the first time, my name is Reem and I'm the Ecosystem Manager here at the Foundation. I oversee a lot of the grants and projects that are being streamlined here, and it's always my absolute pleasure to highlight our great grantees with this grantee spotlight. As a reminder, for those of you wondering whether you can still apply for a grant: applications are still open for wave two. So if you have any ideas, definitely put them through and we're happy to meet with you. For everyone that has already applied, we do definitely appreciate your patience while we go through all of the interviews and follow up with the process moving forward. But without further ado, I want to hand the baton over to Michiel. Michiel's grant is developing something called SkyDocs, but I'm going to leave it to him to go into that more in depth.
C
Hi, I'm Michiel, a developer from the Netherlands. I'll give you a short update on the progress of my grant. I'm working on SkyDocs. SkyDocs is a decentralized Google Docs alternative that uses Sia for storage. Sia is a decentralized cloud storage platform, and they have created Skynet, a content delivery network on top of the Sia network. They provide easy APIs that you can use in your apps or decentralized apps.
C
As part of the grant, I have developed a MetaMask login option. When you use it, you will be asked to sign a string to prove you actually own your Ethereum address; after that you can log in. I've already signed that string with MetaMask; it's encrypted and stored in the local, isolated storage in your browser, so now I can just log in easily. After that, you are presented with a simple screen where you can see your documents.
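The sign-a-string login described here is a challenge-response pattern. A minimal sketch of that pattern follows; note that MetaMask actually signs with ECDSA over secp256k1 (`personal_sign`), which Python's standard library doesn't provide, so HMAC-SHA256 with a key stands in for sign/verify purely to illustrate the protocol shape. All names here are hypothetical, not SkyDocs code.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch of a challenge-signature login flow.
# HMAC is only a stand-in for the wallet's ECDSA signature.

def make_challenge() -> str:
    # The app issues a fresh random string for the user to sign,
    # so an old signature can't be replayed.
    return "SkyDocs login: " + secrets.token_hex(16)

def sign(key: bytes, message: str) -> str:
    # Stand-in for the wallet signing the challenge string.
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

def verify(key: bytes, message: str, signature: str) -> bool:
    # Stand-in for checking that the signature matches the claimed signer.
    return hmac.compare_digest(sign(key, message), signature)

if __name__ == "__main__":
    key = b"users-wallet-key"  # hypothetical key material
    challenge = make_challenge()
    sig = sign(key, challenge)
    assert verify(key, challenge, sig)               # correct key logs in
    assert not verify(b"other-key", challenge, sig)  # wrong key is rejected
```

The important property, which survives the stand-in, is that the verifier only ever sees the challenge and the signature, never the key itself.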
C
It's a pretty basic app, but it works decentralized, and that's pretty cool. With the grant I received from The Graph, I've given the app a whole new, updated user interface and developed the MetaMask login. But of course I'm not finished yet: I'm currently working on integrating the app with The Graph. You will be able to share documents using a subgraph, and then you can see which documents are shared with you.
C
I think we are currently at a great point in time: it's starting to become possible to develop real distributed apps. Some infrastructure is ready, but a lot is still being developed and we are still super early. Everybody is building and there are new things coming out every day. That's great, and I think it's really important to contribute to that, which is why all of the tools I'm developing as part of this grant can be used as building blocks for other apps.
C
The Skynet API I use to communicate with Sia's Skynet is also open source and can be used by other developers, and I am already working on a subgraph. It's needed for SkyDocs, but I've made it more generic so other developers can also use it. I've named it the Share It Network, and it's a smart contract: you can send it your app ID and the data you want to share.
C
It's a generic subgraph, so other people might be able to use it too, and I'm also going to develop a C# SDK to communicate with The Graph's APIs. That's also a building block for other developers.
C
I think it's really important to help this space forward, so that other developers can build new distributed apps that are even better than mine. That's what I wanted to show you. Thanks for having me.
B
Thank you, Michiel, that was awesome. This next one is a community grant. Unfortunately, they can't be here because they're actually in a class right now, but they did share a message with us via a video recording. I'll probably be skipping through some of the grant video itself just to save some time, but we'll definitely share the link for you to view it afterwards. So, without further ado.
D
Hi everyone, my name is Alona and I'm from Belarus. Thank you so much for inviting me, I really appreciate it. I've been part of this great Graph community from the very beginning: I've been a testnet curator, a delegator, and a first paid grantee. Now my project is the creation of educational videos in English and Russian, to educate users about the protocol and how to take part in different activities. We strive to cover both technical and non-technical topics, but the main idea is to show people that these topics are really interesting and that there's no need to be afraid of terms like Web 3.0, GraphQL, subgraph, dapp, and so on. We try to talk about complex concepts in simple language, but it's better to see it once, so now I will show you some fragments from my videos. Here we go.
B
Awesome, and that's it for today. We'll be sharing these resources after the town hall so you can view the video in its full glory, but we're definitely really excited and impressed by all of this art here today. Back to you.
E
Yeah, absolutely, thanks. Wow, it's really great to see those. So I'll start off with some housekeeping. Since the last Protocol Town Hall, we've had a couple of minor upgrades to the protocol. The first one actually took place, I think, the same day as the last Protocol Town Hall.
E
The Graph ecosystem uses Snapshot voting, both for community polling as well as for Graph Council votes. So if you go to snapshot.org, you can search for the Graph Council, and you can see precisely the vote that led to that upgrade. You can see who voted for it, as well as specifically which GIP in the GIPs repo it corresponds to, and what the latest commit in that repo was for that upgrade. That completed about a month ago, and everything went smoothly.
E
We've had another minor one, and that was for this GIP: for those that are curious, GIP-3, which we've discussed in previous town halls. The next minor upgrade was simply a parameter change. This was about allowlisting a contract that we call the allocation exchange contract.

E
This is part of the roadmap to full Vector plus Scalar state channels on mainnet. It basically unlocks the Scalar side of that for the current subgraph migration.
E
This is a contract that most folks probably will not interact with; it's sort of a stopgap, if you will, that primarily Edge & Node's funds will be placed into. There's a second GIP, which I believe has actually already been shared, for the WithdrawHelper, which is the contract that does the same kind of thing except for Vector.
E
So that's something to stay tuned for: you should see talk about it in the forums, and maybe a Council vote in the near future. Those are the two upgrades related to the current migration. I think I mentioned in the previous town hall that we've also done a lot better job of booking auditors' time, which means that we're now getting regularly scheduled, basically retained, auditor time each month, so we've been able to opportunistically get some more fixes into the pipeline as well. So I'm going to kick it off to Ariel to talk about a few minor changes and upgrades to the protocol that you can expect GIPs for soon, based on opportunistically using those audit windows.
F
Hello, good to see you here again in a town hall. I'm going to share my screen a bit, wait here.
F
If you want to see it, the main reason for having these changes is to balance the risk of indexing and serving queries. Serving queries is a more frequent activity, while indexing is something that you do when you allocate, and the slashing percentage is currently the same for each of these activities. So it's good to have a governance way to balance these two activities, and by having these two separate percentages, the community and the governance can do that.

F
This is already explained in GIP number 6, which you can look at in the forum.
F
GIP-7 is related to what Brandon was mentioning about Vector: connecting the state channels to the contracts. This WithdrawHelper is another contract that has some custom logic, so we can send funds from the state channels back to the protocol, to be distributed to the delegators, the indexer, and the curators.
F
Then there are some minor fixes, already audited and merged in different PRs with these numbers. I already talked about some of these in a previous town hall, but I'll quickly describe them. The first one, delegation parameters initialization, is a bug fix: if an indexer is using stakeTo instead of stake, they might forget to initialize the delegation pool with the correct indexer cuts, so this fixes that.
F
There's a second fix related to when an indexer is closing an allocation with an empty POI: the update-rewards snapshot was not being called, so there could be a small difference in the reward calculation in those cases. The third one, which I think is quite interesting, is an improvement: basically keeping a cached copy of the addresses of each of the contracts in the network, so we avoid fetching all these addresses from the Controller, the contract that acts as a registry of all the addresses. That way we can save some gas; in some benchmarks I did, the savings go from around five percent to, I would say, 15 percent, depending on the transaction.
F
So it's quite an interesting improvement. Then, about some future research: there was a proposal in the community about having a more stable indexer cut, which would help both indexers and delegators, particularly because indexers wouldn't need to change their cuts as often when they get many delegations. The idea is to get a more stable cut.

F
There's also research about taking a snapshot of the indexer cut before the allocation is created, so it can't be changed right before the allocation is closed. I just wanted to mention this because I'm looking into it; I think it's a very interesting proposal, and I invite you to add more comments to the discussion.
E
Yeah, we'll get into the dispute stuff in a second, but that last one in particular is really interesting. It was, I think, first put forth in the very first Protocol Town Hall by Gavin from Figment, so it's really cool to see some of these community ideas taking shape. So I wanted to give an update on the arbitration charter.

E
That's one of the big GIPs in the pipeline, and some of the GIPs that you've seen are dependencies for the arbitration charter.
E
I believe we talked about this in the last Protocol Town Hall, so I'm not going to go through it in depth, other than to say: please join the arbitration charter discussion in the forums. We've gotten a lot of really great feedback from folks, especially as we're seeing the first disputes take place in the protocol. I've heard from a lot of indexers that this stuff is now starting to feel real.

E
One of the updates that we've made to the charter concerns what happens when a subgraph fails: whether an indexer could complete an allocation that was already in progress and still collect rewards, or whether they would have to close that allocation with a zero POI and basically forego the rewards for that allocation. We basically put in an allowance that lets indexers complete that allocation, which means that when something happens outside of their control, namely the subgraph failing, they're not punished for it, and they have some time to react while still earning rewards for that allocation.
E
So there's a lot here; obviously there are a lot of sections, and I highly encourage you to read it. All the GIPs, by the way, are hosted on Radicle, but for convenience, while Radicle builds out a lot of the pieces of their UI, we've also been posting them on HackMD and in the forums, so it should be pretty easy to go through and read what's in these proposals. The arbitration charter specifically was depending on a few different GIPs that were in progress.
E
Ariel just spoke to one of them: the separate indexing and query slashing percentages. This was about being able to actually tune these things differently, which was important for some of the logic that the arbitration charter defines around how to balance the risks of indexing and query slashing. As Ariel mentioned, indexers respond to an unbounded number of queries during an allocation but submit only a single proof of indexing, so the risk profile is very different and it needs to be tuned differently.
E
Another one that was in progress, I believe, the last time we spoke was deterministic timeouts. Deterministic timeouts are something we need in the protocol so that if a subgraph, for example, enters an infinite loop or gets stuck in some other way, all the indexers see that event at the same time. For everything in the protocol, we need to make sure things stay deterministic.
E
For that we borrowed an idea that most of you, I'm sure, are very familiar with: the idea of gas-costing the opcodes in our runtime for the subgraph. Leo, I believe, introduced this at the last Protocol Town Hall, so I'm not going to go into it ad nauseam, other than to say the GIP is out now, and you can go through it and actually see how all these opcodes are being priced in the subgraph mappings. The goal for this first GIP was to be correct on an order-of-magnitude basis: the main goal is to make sure the timeouts are deterministic, not that every opcode is perfectly priced. They just need to be roughly correct so that the timeouts are deterministic.
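The idea behind gas-costed deterministic timeouts can be sketched as follows. The opcodes and prices below are hypothetical, not Graph Node's actual cost table; the point of the sketch is that the halting point depends only on the program and the gas limit, never on wall-clock time, so every machine halts a runaway mapping at exactly the same step.

```python
# Minimal sketch of deterministic timeouts via gas metering.
# Opcode names and prices are illustrative assumptions.

GAS_COST = {"load": 2, "store": 3, "add": 1, "jump": 1}

class OutOfGas(Exception):
    pass

def run(program, gas_limit):
    """Execute until done or until the gas limit is exceeded.
    The step at which execution halts is a pure function of the
    program and the limit, so every indexer observes the same halt."""
    gas_used = 0
    for step, op in enumerate(program):
        gas_used += GAS_COST[op]
        if gas_used > gas_limit:
            raise OutOfGas(f"halted at step {step}")
    return gas_used

# A long-running loop is just a long stream of ops: with the same
# limit, every machine halts it at the same step, fast CPU or slow.
looping = ["load", "add", "jump"] * 1_000_000
try:
    run(looping, gas_limit=10_000)
except OutOfGas as e:
    print(e)
```

Notice that prices only need to be roughly proportional to real cost for this to work; the determinism of the halt does not depend on the prices being accurate, which is why the GIP only aims for order-of-magnitude correctness.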
E
Future GIPs might build on this work and actually try to make the gas costing of those opcodes a highly accurate predictor of execution time, which would be a really useful feature, I think, for other parts of the subgraph stack, but that's future work that will build on this.
E
The last one I want to touch on is the API versioning support. This was a work in progress last time we spoke; I don't believe there was a GIP out. This was the final dependency for the arbitration charter, and it's all about how we know, at a given point in time, what the valid subgraph API behavior is with respect to the protocol. A lot of the protocol depends on determinism.

E
Determinism is what the rest of the protocol rests on, and so, first, it's important that indexers actually know what the correct behavior of the subgraph API is. Secondly, and this is a little bit more subtle, The Graph as an ecosystem and as a project has two goals that it wants to optimize for in parallel. One is that it wants to push usage of the decentralized network as much as possible.
E
Edge & Node is doing this migration right now of subgraphs from its own hosted indexer to the decentralized network. That's where all the tooling and all the indexers are right now, so that's where we want the growth in query volume and usage to happen, as opposed to where it's been happening thus far.
E
The other side of that is that we also want to add capabilities to the decentralized network as quickly as possible. Something we learned in the last two or three years, when Edge & Node was primarily the core developer on the network, was that a lot of the time these features follow a life cycle, a pipeline: the very first time you implement a feature, maybe it's experimental.
E
Then maybe the API is solidified, but you're not quite sure whether it's deterministic yet; then maybe you get it to a level where you feel pretty comfortable that it's deterministic, but you haven't done all the integration testing to know for sure that it's deterministic. A lot of the time there's research involved in that pipeline, to see how to make something that normally wouldn't be deterministic actually behave deterministically, and that happens per feature. The way the protocol works today is that many of its features, for example indexing rewards or arbitration of indexing disputes, rely on a level of determinism. So what we're doing in this GIP is not only proposing a method of understanding what the canonical subgraph API behavior is; we're also specifying what the intersection of subgraph API features with supported protocol features is. It'll be a little easier if I can show you an example.
E
I'll jump towards the end here; I think I've got a table. This became particularly salient during the recent subgraph migration, because one of the subgraphs that was migrated had a feature that previously had not been supported as fully deterministic, and indexers were scrambling: do we index it? Do we not index it? Do we submit proofs of indexing?
E
Do we not submit proofs of indexing? There wasn't a lot of clarity on what the supported behavior would be for a subgraph like that, with specific features that were non-deterministic. The idea of this GIP is that you can look up named subgraph features that we've defined. We've bucketed them into several categories: core features, which apply to all subgraphs regardless of the network they're indexing; the data source types themselves, that is, whether indexing Ethereum, or NEAR, or Arbitrum, or whatever, is supported at all; and then data-source-specific features listed underneath those. You can see a full support matrix being defined, where for each feature you can see: is the feature implemented? (If a feature is implemented, it automatically makes it eligible for queries, query fees, and Agora cost models.) Is the feature experimental? (I'm going to put a pin in this and come back to it in a second.) And then: does it support query disputes? Does it support indexing disputes? Does it support indexing rewards?
E
We only want the protocol to pay out indexing rewards on subgraphs that have deterministic indexing, because we want to make sure that indexers are being compensated for real work. It's important to note, and I touch on this further up in the GIP, that some features have non-deterministic querying, such as full-text search; you see here that we have a "no" for query dispute arbitration on that feature.
E
But not all features that have non-deterministic querying also have non-deterministic indexing. So a subgraph like the Omen subgraph, I believe it was, that was migrated in the recent subgraph migration could actually support query fees; it would not support query dispute arbitration for those queries that involve full-text search, but it would support indexing rewards and indexing dispute arbitration. This was the sort of clarity that was missing about how these support matrices would play out in the real world.
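A support matrix like the one described can be sketched as a simple lookup table. The feature names and flags below are illustrative assumptions, not the GIP's official table; the sketch shows the key rule that a subgraph only supports a protocol capability if every feature it uses supports it.

```python
# Hypothetical encoding of a feature support matrix.
# Feature names and flag values are illustrative only.

SUPPORT_MATRIX = {
    "ethereum": {
        "queries": True, "query_disputes": True,
        "indexing_rewards": True, "indexing_disputes": True,
    },
    "fullTextSearch": {  # querying is non-deterministic, indexing is not
        "queries": True, "query_disputes": False,
        "indexing_rewards": True, "indexing_disputes": True,
    },
    "nonFatalErrors": {  # experimental: treat as non-deterministic
        "queries": True, "query_disputes": False,
        "indexing_rewards": False, "indexing_disputes": False,
    },
}

def subgraph_support(features):
    """A subgraph supports a capability only if all of its features do:
    the intersection of the per-feature support rows."""
    caps = {"queries", "query_disputes", "indexing_rewards", "indexing_disputes"}
    return {c: all(SUPPORT_MATRIX[f][c] for f in features) for c in caps}

# A subgraph using full-text search, like the migrated Omen subgraph:
omen_like = subgraph_support(["ethereum", "fullTextSearch"])
```

Under these assumed flags, `omen_like` comes out eligible for query fees, indexing rewards, and indexing dispute arbitration, but not query dispute arbitration, matching the example above.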
The inspiration for this, for those of you that come from a web development background, was Can I Use, this really amazing website that I think is still widely used, which would tell you at any given point which features of the web you could use on which browsers. The goal here is a little bit different, but it's a similar sort of resource where you can understand:
E
Okay, I'm building this subgraph, either as a developer or as an indexer that wants to index it; I see the features that are included in it, and then I see exactly how those are supported in the protocol. What that allows us to do is develop the protocol very rapidly and keep adding features, and then let each of those features granularly move through this feature life cycle pipeline, where they eventually become fully deterministic for querying and for indexing, and thus support disputes and arbitration for both.
One question that might come up here is: okay, how do I find out what named features are in a subgraph? Graph Node is going to be the source of truth for this, for the time being, and the reason for that, again, is that the protocol is developing extremely rapidly.
E
The network was just launched less than a year ago, and there isn't any implementation-agnostic specification of subgraph API behavior yet. That is something that's on the roadmap, so that you could eventually have a multi-client network.

E
But it also means that that data, as opposed to being implicitly defined just by the subgraph's behavior, will be explicitly defined in the subgraph manifest. That means it can be indexed by The Graph and shown in network explorers that help dapp developers discover useful subgraphs to build on, and help indexers discover useful subgraphs to index, without ever having to actually load the subgraph into Graph Node first to understand what its capabilities are.
E
The other thing that comes with using Graph Node as a reference implementation is that we need to start versioning it using semver, that is, semantic versioning, for the non-developers on the call.
E
This isn't something we've been doing super regularly thus far; Graph Node is not yet at a 1.0 release. For those that need a quick review: semver breaks versions down into numbers separated by two dots, so you'll have something like 1.1.12. The number on the far left is the major version, the number in the middle is the minor version, and the number on the far right is the patch release. The idea is that between minor versions of Graph Node, functionality should be backwards compatible, and between major versions of Graph Node, breaking changes can be made to the subgraph API behavior.
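The versioning rule just described reduces to a simple check; a minimal sketch, with illustrative version strings:

```python
# Sketch of the semver rule above: subgraph API behavior is treated as
# stable across minor and patch releases, and may only break at a new
# major version.

def parse(version: str):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints."""
    major, minor, patch = (int(x) for x in version.split("."))
    return major, minor, patch

def same_canonical_behavior(a: str, b: str) -> bool:
    """True if two Graph Node releases must expose compatible subgraph
    API behavior, i.e. they share a major version."""
    return parse(a)[0] == parse(b)[0]

print(same_canonical_behavior("1.1.12", "1.4.0"))  # minor bump: compatible
print(same_canonical_behavior("1.9.3", "2.0.0"))   # major bump: may break
```

This is what lets a single major version stand in for "the canonical subgraph API behavior" in the next paragraph: any two releases with the same major version are interchangeable for that purpose.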
E
What that lets us do is actually use major versions of Graph Node as the canonical definition of subgraph API behavior in the network, because the expectation is that until you get to a new major version of Graph Node, the functionality should be stable. So the Council can actually vote and define that a given version of Graph Node is the current official version in the protocol, and the arbitrator and the indexers would all be expected to abide by the canonical behavior defined by the Graph Council via that Graph Node version.

E
While we're here: I mentioned experimental features before. Experimental features might have breaking changes between minor versions of Graph Node. The reason is that oftentimes, when you develop a feature for the first time, you want to get feedback on the API before that API becomes ossified in the library. So this gives us the flexibility to have features that are experimental; for example, right now we have a feature in subgraphs called non-fatal errors.
E
That feature is experimental. It actually probably is deterministic; we haven't done the full validation of that, but at this point in the life cycle it doesn't matter, because the API can't be considered stable, even across minor versions. So practically you can think of it as non-deterministic, even if in theory we could implement the feature as deterministic, because the APIs haven't solidified yet.
E
So that's where those experimental features come into play. As part of this semver versioning strategy, the matrix that I showed you a second ago would also be defined by the Graph Council. This could be in separate governance actions (recall, we looked at the governance actions in Snapshot), but the Council would basically be on the hook for defining both the canonical major version of Graph Node that defines the protocol functionality, and the feature support matrix for those named features that are included in that version of Graph Node. So I'll pause there.
There's obviously a lot of content here that I can't get through, and I'm not going to go into all the details, but I highly encourage you to check this out in the forums, as well as the GIPs that Leo and Ariel have presented previously. One final note, and then I'm going to kick it off to Ford and Ariel: this is kind of a large bundle of changes, and most of them aren't contract changes; a lot of these are conventions and ecosystem processes.
E
We
do
think
this
is
a
really
good
place
to
get
community
feedback
in
the
interest
of
not
creating
a
lot
of
overhead.
Logistically
we'll
probably
create
a
single
community
snapshot
poll
for
the
arbitration
charter
as
well
as
all
the
gips.
It
depends
on
and
so
that'll
be
a
good
way
for
us
to
get
a
pulse
on.
You
know.
Do
people
understand
you
know
what
we're
talking
about.
Do
people
agree
with
what
we're
talking
about.
E
Have
people
have
sufficient
time
to
understand
the
conventions
you
know
that
are
being
advocated
for
here
so
stay
tuned?
For
that
you
know
be
aware
that
some
of
the
gips,
you
know
that
it
depends
on
are
still
kind
of
at
a
proposal
stage
like
I
think,
there's
still
some
to
do.
Sections
in
the
in
the
versioning
gip,
for
example.
I
think
there's
a
so.
This
is
a
way
of
getting
sentiment
on.
What's
on
the
gips
today
it
doesn't
mean
the
gips,
as
is,
are
going
to
be
the
final
thing
that
becomes.
E
You
know
the
official
behavior
of
the
protocol.
It's
just
kind
of
a
way
of
us
gauging
you
know
where
people
stand
on
the
proposals
in
their
current
state.
E
So with that, I'm going to kick it off to Ariel and Ford, because we actually have the first round of disputes going on in the protocol currently. If you're curious, you can actually find them: go to forum.thegraph.com, under Governance and GIPs, and you can go to Arbitration and follow the discussion there. But I'll let Ford and Ariel give you some context on how those conversations are going.
F

Yeah, well, you can see in the forum that there were some disputes created, and the arbitration team created a request for information. Thank you, Brandon. The format for requesting the information is sort of a template with some base information that is easy to provide, which makes the process of reviewing whether the dispute is right or wrong easier. In this case it's request number one, for P2P, and after receiving that information, Ford has been looking into the actual POIs and trying to dig into the issue, but he can share more information about that.
E

Martin, do you need to enable talking for…?
G

Right, there we go. Now I'm a co-host. Hey everyone. So, as Ariel said, we're going through the first execution of the arbitration charter, and the first case we're going into detail on is this: there were seven active disputes against the P2P indexer.
G

As this is our first time going through this process, we're making sure to be transparent and really lay out a good process for the future. If you go to the forum, you can see the information I've asked for from P2P, which includes the proof of indexing.
E

That's cool, yeah. So one thing I just want to reiterate here, for those that weren't on the last town hall, is that one of the things the arbitration charter specifies is that, especially during the early bootstrapping days of the network, indexers should not be punished for faults that occur due to software malfunctions or gaps in the tooling. So part of this investigation is really aimed at finding the root cause.
E

Why did this happen? It's less about punishment; it's more about root-causing why these determinism inconsistencies occur. And keep in mind, these things can sneak in in a lot of different places. There could be determinism bugs in Graph Node, but there could also be determinism bugs upstream, like in Ethereum node providers or your DevOps configuration, or there could be networking issues causing you to miss certain messages that you'd expect to see. So it's pretty subtle, investigating and root-causing these things. Ford and Ariel have been doing a really great job. At Edge & Node we've also been tracking some additional tooling we want to build for indexers, to help them build more confidence in the POIs that they're submitting.
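One shape such tooling could take, as a rough sketch: before submitting, compare the POI you computed locally against the POIs other indexers report for the same subgraph and block, and treat any disagreement as a signal to investigate. The function, indexer names, and POI values below are hypothetical illustrations, not an actual indexer tool:

```python
# Hypothetical sketch of POI cross-checking before submission. A POI
# mismatch against peers flags a possible determinism issue (in Graph
# Node, the Ethereum provider, or the DevOps setup) before any stake
# is at risk. Indexer names and POI values here are made up.
def poi_divergence(local_poi, peer_pois):
    """Return the peers whose reported POI differs from our local POI
    for the same (subgraph, block) pair."""
    return [indexer for indexer, poi in peer_pois.items() if poi != local_poi]

peers = {
    "indexer-a": "0xabc",
    "indexer-b": "0xdef",
}
mismatches = poi_divergence("0xabc", peers)  # ["indexer-b"]
```

A disagreement doesn't by itself say who is wrong, but it tells you to dig into the root cause before any stake is on the line.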
E

Specifically, you don't want to find out that you had a determinism bug after you've submitted a proof of indexing and are now on the hook for potentially being slashed. Even though the arbitrator will exercise discretion and try to make sure that people aren't erroneously slashed, it's still probably a level of stress and risk that you'd rather not take on.
E

You'd rather have more confidence in your Graph Node, your DevOps configuration, and the functionality that has maybe been released in newer versions of Graph Node, and that's something we're also highly focused on. But I think we're building a lot of really important muscle going through this for the first time, and I think it's really timely: as the community discusses the arbitration charter, we've actually been able to incorporate some of the learnings from going through this process back into the arbitration charter.
E

I don't think we'll specifically be discussing the charter, but if you have questions about it and you want to go deeper, feel free to bring it up there as well, along with the other indexers on the call that have been thinking about this stuff a lot. So I'll pause there, and I think we've set aside the rest of the time, if I'm not mistaken, for general Q&A.
E

But yeah, if folks need more time to absorb this stuff, please feel free to just hop into the forums. You can read these GIPs in their full content and ask more thoughtful questions there once you've had time to digest some of this stuff.