From YouTube: The Graph's Town Hall #2, April 6th, 2021
Description
The Graph community's second Graph Protocol Town Hall.
This video was recorded Tuesday, April 6, 2021 @ 8am PST.
A
So, just wanted to thank everyone for joining us for our second protocol town hall. It's been a busy few months as we ramp up on the Foundation side and also ramp up our governance processes. So thank you for joining us. Just going over the agenda for today: we do have a packed meeting, so we'll first start off with some protocol updates and get an overview of GIP-2 and GIP-3, and then we have a few people presenting on potential future updates.

A
Then we'll have Yaniv go over the network migration roadmap and what's expected over the next few months, and then we'll give an overview of the Foundation and do a bit of a show and tell with some of the grantees at the end, and hopefully leave some time for Q&A.

A
That sounds good. I'll pass it off to Brandon, who will do an overview of GIP-2 and GIP-3.
B
Thanks, Eva. Yeah, I'll try and keep this quick since there's a lot of agenda to cover today, but the quick update on GIP-2 is that the upgrade has gone to mainnet since the last time we spoke here.

B
So that's The Graph's first protocol upgrade via decentralized governance, which is very exciting. It was a lot of work to get there, as those of you who are following closely know: getting the Graph Council set up, getting a lot of the Foundation operations set up, the Snapshot community polls, the GIP process on Radicle, and so on. And I think there's also been a lot of really good conversation around GIP-2 that is already establishing the norms that are going to govern the network.
B
The Council met yesterday and did a little bit of a postmortem on GIP-2 and how that process went, so there'll be some notes released on that later in the week. Some high-level thoughts that I had: we're constantly thinking about what the principles are that are going to govern the network, and obviously each of us, as community members, is contributing to what those principles could be.

B
In the case of GIP-2, I think a lot of folks brought up points related to something I had written about before, which was avoiding zero-sum games, and they saw it as, hey, GIP-2 seems to benefit one stakeholder group a little bit more than it benefits another. But I think the key part about GIP-2 for me was that it was also incredibly important to the network as a whole, as the network's going into this new migration phase that's going to be absolutely critical for the success of the network. And so I don't think, on any single proposal, we're always going to have this perfect scorecard where every single thing hits every stakeholder group evenly. Sometimes you'll do something for the benefit of the network as a whole, and maybe it benefits one stakeholder group a little bit more, maybe it benefits another. But I think, holistically, as we look at the proposals that we'll be researching on our end, you'll get a little bit more of that evenness spread out across the stakeholder groups. An example of that is we'll be switching some of our research attention to issues that delegators have brought up in their interactions with the delegation market, and I think those are likely to again benefit the protocol as a whole, but maybe lean towards the delegators a little bit more.
B
Some other takeaways: I think it's important to reiterate, and the Council kind of agreed with this, that the Snapshot community polls are just a poll. It's important for that expectation to be set up front, so that either the Council or the Foundation has the flexibility to poll the community on ideas that are still works in progress.

B
If you look at the GIP-1 language, it basically shows Snapshot community polls happening throughout the GIP process, starting at early draft all the way up to the candidate stage, where there's a candidate implementation. It's totally fine that some protocols have direct token governance, as long as that expectation is set up front. But the important thing here is expectations: right now, the way Snapshot community polling is set up, it's meant to be an information signal, not a referendum. And then the last thing is, I think it was frustrating for a lot of people in this process, including ourselves, to be moving so slow.
B
We're used to moving in a very agile environment, and I think a big part of it this time was audits; they seemed to be the constant bottleneck. So the update there is that, starting next month, the Foundation will have a retainer set up with ConsenSys Diligence, and then there will be a retainer starting with OpenZeppelin, I believe, the month after. So from that point forward there will be a little bit more of a regular cadence of audits that are just booked for The Graph's exclusive use, and we hope that'll make a big impact in keeping these things moving a little bit more smoothly in the future.

B
The last update I have is more of a housekeeping note: the audits for GIP-3 have started this week, so that's an update you can expect in the near future. And there are a number of other proposals that you'll be hearing about from folks on the call that are related to the subgraph migration, so that's another thing to keep an eye out for in the forums and in the Snapshot community polls. And with that, I'll yield my time.
A
Awesome, thank you so much, Brandon. So next up we had a few people who want to discuss updates. Brandon, did you want to start off on that one as well?
B
I can introduce them at a high level, if I know what you're referring to. We have two that are related to the subgraph migration. One of them is about making sure that the subgraph indexing process is fully deterministic, for the purpose of arbitration. Right now we have something in the subgraphs that's a timeout, and it's time-based, not gas-based, so it's subjective between indexers.

B
So we have a proposal that Zach will be talking about to address that. And then, as part of the state channel solution that we're rolling out, we have a parameter update that's happening in the protocol, and Ariel will be talking about that. I'll just hand it off to them.
C
Yeah, hello. One of the changes we need to make is a parameter update in the protocol to allow something that we call an asset holder to send funds to the staking contract.
C
Currently, the source of funds being collected from state channels is something called the gateway, and with the new Vector state channels, what we need is to use a contract called a withdrawal helper that is going to connect the multisig that is holding the collateral to the protocol. So we need to set this parameter in the staking contract.
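As a rough illustration of what this parameter update amounts to, here is a minimal sketch in Python. It assumes hypothetical names (`set_asset_holder`, `collect`, a governor address) rather than the real Graph contract interface: governance flags an address as an asset holder, and only allowlisted addresses may push collected query fees into the contract.

```python
class StakingContract:
    """Toy model of a staking contract with an asset-holder allowlist.

    Only addresses flagged as asset holders (e.g. the withdrawal helper
    backed by the state-channel multisig) may push collected query fees
    into the contract. Names are illustrative, not the real contract API.
    """

    def __init__(self, governor):
        self.governor = governor
        self.asset_holders = set()
        self.collected = 0

    def set_asset_holder(self, caller, address, allowed):
        # Parameter updates are restricted to governance.
        if caller != self.governor:
            raise PermissionError("only governance may update parameters")
        if allowed:
            self.asset_holders.add(address)
        else:
            self.asset_holders.discard(address)

    def collect(self, caller, amount):
        # Funds may only arrive from an allowlisted asset holder.
        if caller not in self.asset_holders:
            raise PermissionError("caller is not an allowed asset holder")
        self.collected += amount


staking = StakingContract(governor="council")
staking.set_asset_holder("council", "withdrawal-helper", True)
staking.collect("withdrawal-helper", 100)
```

The design point is simply that the allowlist is a governance-set parameter, so enabling the new Vector payment path is a parameter update rather than a contract upgrade.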
B
Yeah, so the state channel solution is kind of extra-protocol, right? It's this separate Vector thing that Connext has been working on, and it's audited separately as part of that project. This is just an allowlist to allow the integration of those payments to be settled into our protocol. Ariel's working on a GIP now; I don't believe it's in the repo, but I think it should be up later this week for folks to review.
A
Awesome. We can move on, I guess, to Zach, who will be talking about the WASM update.
D
For example, imagine a subgraph where the code is just an infinite loop. If you could trick an indexer into indexing just a few infinite loops, that would steal a significant amount of resources from the indexer that are meant to index real, value-add subgraphs. Curation deals with this to some extent, but the problem is severe enough that an indexer should take it into their own hands as well and not rely solely on curation for their security.
D
So there's an existing mechanism in graph-node to deal with this, which is a timeout. After a configurable amount of time spent in handlers, graph-node will cease indexing a subgraph and transition it into an error state. The problem with timeouts is that they are not deterministic, so two indexers running with different hardware or timeout configurations may reasonably disagree as to whether a subgraph did in fact time out.
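The non-determinism can be made concrete with a toy model (the names and numbers here are illustrative, not graph-node internals): the same fixed workload passes or fails the timeout depending on hardware speed, so two honest indexers reach opposite verdicts about the same subgraph.

```python
def indexes_successfully(handler_cost_units, units_per_second, timeout_seconds):
    """Return True if this indexer finishes the handler before its timeout.

    handler_cost_units models the fixed amount of work in the handler;
    units_per_second models hardware speed. The verdict depends on both,
    which is exactly what makes a wall-clock limit non-deterministic.
    """
    return handler_cost_units / units_per_second <= timeout_seconds


WORK = 1_000  # same subgraph handler, same amount of work for everyone

fast_indexer_ok = indexes_successfully(WORK, units_per_second=200, timeout_seconds=10)
slow_indexer_ok = indexes_successfully(WORK, units_per_second=50, timeout_seconds=10)
# fast_indexer_ok and slow_indexer_ok differ: two honest indexers disagree.
```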
D
If one indexer is saying that the subgraph timed out and the other is not, then the attestations disagree, there can be a dispute, and, in the worst case, slashing. Nobody wants that. Instead, what's implemented today is that the indexer that timed out does not provide attestations or proofs of indexing at all, while the indexer that progressed beyond the timeout can still serve queries and proofs of indexing. This seems reasonable, but it sets the stage for a rather insidious economic attack.
D
Let's say that you have a malicious indexer, and they want to claim exclusive indexing rewards that only they can collect. They may develop a subgraph that takes effectively forever to compute.

D
So maybe it's not as straightforward as an infinite loop, but it's something almost as bad, like brute-forcing a winning strategy to the game of chess in a subgraph handler and then saving the winning strategy in an entity. This would take much more compute than is possible, and we can clearly know that no indexer is able to index that subgraph and that all honest indexers would time out.
D
Nor can you provide an attestation for a dispute. Even worse, showing that the correct value is uncomputable is equivalent to solving the halting problem, which is a famously unsolvable problem in computer science. So in this scenario the indexer claims the rewards but does not give any utility back to the community.
D
So that's a very long-winded, but hopefully educational, introduction to a complex problem with a straightforward and easy solution. What this GIP is going to propose is that specific costs be assigned to operations that a subgraph can perform. These costs are called gas; you're probably familiar with the concept from Ethereum.
D
As for whether or not implementing some kind of gas is a good idea, this is practically a given; it's almost tautological that gas is the right answer to this problem. So I would expect that part of the GIP to go forward without much controversy. But within those bounds there's a lot of design space and many important questions for the community to answer. One of the first questions is: over what period is gas counted? Does it count per block? Does unused gas accumulate over time? When is it reset to zero? That kind of thing.
D
If the gas was counted per block, whether or not the subgraph would fail would depend on how many handlers ran in that block, which would be dependent on the popularity of the smart contracts, the changing block gas limit, and other factors. But if, instead, the gas is counted per handler, it becomes much easier for the subgraph author to reason about: if they can show, within the limited scope of the handler, that it never exceeds the gas limit, then they can have assurance that the subgraph should continue to work forever.
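Per-handler accounting can be sketched as follows. This is an illustrative model, not graph-node's actual metering API: the counter resets at handler entry, so a subgraph's fate depends only on the work inside each individual handler, and never on hardware speed or on how many handlers happened to run in a block.

```python
class OutOfGas(Exception):
    pass


class GasMeter:
    """Deterministic per-handler gas accounting (illustrative sketch).

    The counter resets at the start of each handler, so whether a subgraph
    fails depends only on the handler's own work, not on the indexer's
    hardware or on block-level activity.
    """

    def __init__(self, limit):
        self.limit = limit
        self.used = 0

    def start_handler(self):
        self.used = 0  # unused gas does not accumulate across handlers

    def charge(self, cost):
        self.used += cost
        if self.used > self.limit:
            raise OutOfGas(f"handler exceeded gas limit of {self.limit}")


meter = GasMeter(limit=100)
meter.start_handler()
for _ in range(10):
    meter.charge(5)    # e.g. entity saves, eth_calls, WASM compute steps
meter.start_handler()  # next handler starts from zero again
meter.charge(60)
```

Because the charge schedule is fixed, every indexer running the same handler over the same block computes the same `used` value and reaches the same verdict.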
D
Another question that the GIP is going to talk about is what the gas limit should be set to. One option there would be to have the gas be configurable: a subgraph developer could finely tune the gas in a subgraph manifest according to the needs of that subgraph. This approach has some benefits. First, configurable gas means supporting a wider range of possible subgraphs than might be anticipated today.

D
Furthermore, the gas limit specified by the subgraph developer could provide an important signal to indexers about the cost of indexing a subgraph. Indexers may then use this information when deciding whether or not to index a subgraph, or even maybe what hardware instance would be best to index that subgraph on. This is, unfortunately, not the design that will be put forward in the proposal.
D
Instead, the proposal specifies that the gas limits will be set to reasonable protocol-wide defaults. The reason for this is that it's better to have a reasonable gas limit sooner and add configurable gas later, in a later GIP. Migrating toward having configuration options later is easy, because we can use the current defaults if no gas limits are specified in the manifest. Adding a configuration may seem like a small task, but there are actually a lot of knobs and questions about what the right way to do configuration would be.
D
There would be questions about whether we want to track gas separately per resource, like whether saving an entity, doing an Ethereum call, or WASM compute would be tracked separately, or questions about the naming and grouping of options inside the YAML file. It's my expectation that these and other questions would generate a significant amount of discussion, but they may not be the best area of focus right now. So that's why we go with reasonable defaults in this GIP.
D
The answer, then, to what the gas limit should be set to right now, what the default configuration is going to be, is: just really high. The numbers need to be high enough to ensure that no major pre-existing subgraphs fail for gas reasons when migrating them to mainnet. This answer works well with our original motivations for adding gas: the goal here wasn't to drive down the cost of indexing or to make indexing costs fair across subgraphs, like it might be with Ethereum.
D
So it's still at the indexer's discretion to stop indexing at any time, for example if there's a timeout, or if they deem that the indexing is too expensive for whatever reason. Under those circumstances, where the indexer and not the protocol decides to stop indexing, the indexer would not serve requests or collect a proof of indexing for blocks that they had not yet indexed.
E
Great. One point of clarification is that this isn't proposing adding additional cost to the indexing, so this isn't like gas that users are paying for, like they do in Ethereum. It's specifically to define a gas limit which stops network-level indexing of that subgraph to catch these kinds of pathological cases, and the costing side of the indexing is still meant to be recovered from query fees.
F
Well, we shared a few improvement ideas a few weeks ago on the forum. It's been more than two years since our team started working on The Graph. Actually, I was the first member of that team, so I've been dealing with this tooling from the very beginning; I'm one of the early-stage users.
F
So it has improved a lot, but we believe that we can improve it even more, making it more powerful and adding more features. We mainly focused on improvements that we can make just by changing the CLI code base, and only the CLI code base. That means no change is required to graph-node; we're not touching anything other than the CLI. So I'm going to go through the list. The main one is upgrading the AssemblyScript version we use in the CLI.
F
The current version the CLI is pinned to is a really old version of AssemblyScript, a version from two years ago. With that version you can't even convert a string to lower case without operating on bits, but in a new version, for example, you can use a complete standard-library string implementation.
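To illustrate the difference being described here, the sketch below (in Python, for readability) contrasts lowercasing by manipulating character codes directly, roughly the bit-level work an old runtime without a string standard library forces on authors, with the one-call standard-library version.

```python
def to_lower_ascii_manual(s):
    """Lowercase ASCII by manipulating character codes directly, roughly the
    kind of bit-level work required when no string stdlib is available."""
    out = []
    for ch in s:
        code = ord(ch)
        if 65 <= code <= 90:   # 'A'..'Z'
            code |= 0x20       # setting bit 5 lowercases an ASCII letter
        out.append(chr(code))
    return "".join(out)


# With a complete standard library, the same operation is a single call:
manual = to_lower_ascii_manual("GRT Token")
builtin = "GRT Token".lower()
```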
F
So we believe that this upgrade will provide a much better user experience for subgraph developers. That is our first proposal: upgrading AssemblyScript.

F
There are a lot of improvements in the standard library, more implementations for the user, so it's all a win. The second proposal is extending the CLI to support importing a schema from another package. For example, in the schema file you would be able to import a schema defined in another package, like another library.
F
The other ones are less important, but, for example, right now we have some kind of templating workaround to deploy the same subgraph to different networks. Today you would use an external library, an external template engine, to render the different subgraph manifests, but we believe that this kind of feature should be part of the CLI. And, well, we proposed many other small improvements.
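The templating workaround mentioned above can be sketched like this. The manifest fields and network values are hypothetical, but the shape is what teams do today with an external template engine: keep one manifest with placeholders and render a copy per network, which is the workflow the CLI proposal would absorb as a built-in feature.

```python
from string import Template

# A simplified subgraph manifest with per-network fields left as placeholders.
MANIFEST_TEMPLATE = Template("""\
dataSources:
  - kind: ethereum/contract
    network: $network
    source:
      address: "$address"
      startBlock: $start_block
""")

# Illustrative per-network values (addresses are placeholders).
NETWORKS = {
    "mainnet": {"network": "mainnet",
                "address": "0x0000000000000000000000000000000000000001",
                "start_block": 11_000_000},
    "rinkeby": {"network": "rinkeby",
                "address": "0x0000000000000000000000000000000000000002",
                "start_block": 8_000_000},
}


def render_manifest(network):
    """Render the subgraph manifest for one target network."""
    return MANIFEST_TEMPLATE.substitute(NETWORKS[network])


mainnet_yaml = render_manifest("mainnet")
```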
F
They're all around the developer experience for developers on The Graph, so feel free to take a look at the post; it's in the forum. Please provide feedback if you want; all feedback is welcome. So thank you, that's it.
A
Awesome, thank you so much, Sebastian. And last but not least, we have Ariel to talk again about a few updates he's working on.
C
Okay, I will share my screen a bit. Yep. So I want to talk about one of the updates, which is related to a fix for how we initialize delegation parameters in the staking contract in a particular case. This was reported through a bounty, and we already have a fix that's in this PR.
C
You can read it, because there's a description of the issue, but I want to describe it. Basically, the issue is: when you stake for the first time as an indexer, we initialize the delegation parameters, and we set them to be the most beneficial to the indexer, like a hundred percent indexer cut. The issue is that when you use stakeTo, which is a function on the protocol that we included for a third party to stake on a particular indexer address, this function initializes the delegation parameters using the message sender. So the result is that we are not getting the right initialization of the delegation parameters in those cases. This never happened in the protocol; stakeTo is not used much right now, and it's easily fixed by an indexer, because after using stakeTo you can then set the delegation parameters yourself.
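A toy model of the bug, with simplified, illustrative names (not the real contract code): on a first-time stake, the protective default parameters should be keyed to the indexer being staked for, but the third-party staking path keyed them to the message sender instead, leaving the indexer without its defaults.

```python
class Staking:
    """Toy model of the delegation-parameter initialization issue."""

    FULL_INDEXER_CUT = 1_000_000  # 100% expressed in parts-per-million

    def __init__(self):
        self.stakes = {}
        self.delegation_params = {}

    def _stake(self, indexer, amount, param_owner):
        first_time = indexer not in self.stakes
        self.stakes[indexer] = self.stakes.get(indexer, 0) + amount
        if first_time:
            # Buggy behaviour: defaults keyed to msg.sender (param_owner)
            # instead of the indexer being staked for.
            self.delegation_params[param_owner] = self.FULL_INDEXER_CUT

    def stake(self, sender, amount):
        # Normal path: sender IS the indexer, so the bug is invisible here.
        self._stake(sender, amount, param_owner=sender)

    def stake_to(self, sender, indexer, amount):
        # Third-party path: param_owner should be `indexer`, not `sender`.
        self._stake(indexer, amount, param_owner=sender)


s = Staking()
s.stake_to(sender="third-party", indexer="indexer-1", amount=100)
# indexer-1 holds stake but never received its default delegation parameters:
indexer_missing_params = "indexer-1" not in s.delegation_params
```

The fix is simply to pass `indexer` as the parameter owner on the stakeTo path; as noted above, an affected indexer can also recover by setting the parameters explicitly afterwards.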
A
Perfect. So if that's it for protocol updates, we'll move on now to network migration, and I'll hand it off to Yaniv.
E
Hey everyone. So, yeah, it is time for subgraph migration. I think this is the thing that everybody's excited about. It's been a long time in the making; we've got about a year and a half's worth of work that we're going to be releasing with this next set of releases, and we published a post a little over a week ago.

E
It discusses the process, so I'll just recap it here so everybody's on the same page. The subgraph migration is going to take place in three phases. The first phase is the migration bootstrapping phase, and the idea is that there are a lot of participants, indexers, delegators, developers, and we don't want this to be too chaotic of a process.
E
So we're going to help coordinate to get the first set of subgraphs published on the network at around the same time. There will be an initial set of subgraphs, and those subgraphs will be what we call over-signaled.

E
The idea here is that, in the curation market, as more and more subgraphs get deployed to the network, the signal becomes a really important mechanism for knowing which subgraphs to index, and for developers to know which subgraphs to use. But in the early days, when there's just a handful of subgraphs, the signal doesn't actually provide much additional value, and there are actually more risks of things becoming noisy. So, to just remove all of that noise:
E
The subgraphs will initially be over-signaled, which basically makes it so that there's no real incentive for curation to happen at that point. But at the start of the first phase, indexers will be able to start indexing those subgraphs and then start serving queries in the network against them.

E
So this is going to be something where Edge & Node works closely with the Foundation and with this initial set of dapps to do this first wave. And once the subgraphs are synced, which for some subgraphs could take up to a few weeks, while some might just be a few hours:
E
The dapp teams will be able to start testing those subgraphs using an endpoint that uses the state channel implementation to do micropayments per query, and once those teams go through their internal QA processes, they'll be able to move on to the second phase, which is production dapps. This is when they take their front end and just point it to the new endpoints that get routed to the decentralized network instead of the hosted service.

E
We're really excited to see the performance of having a geographically distributed set of gateways all over the world routing to a large, diverse set of indexers that are optimized for these different subgraphs. It should be a really exciting time to see those first production dapps on the network.

E
And then we move to phase three, which is curation live, and that's where we'll be launching a set of products that make it really easy for anybody to publish their subgraphs to the decentralized network, pay for query fees in GRT, and also have the curation market UI, so the curators can start signaling on all the new subgraphs. That's the general arrangement for these sets of products.
E
It's something that the Edge & Node team is building; we have the design, product, and engineering expertise, and it's getting kind of white-labeled to the Foundation through the services agreement that's in place there. So those products will be integrated into thegraph.com, and that creates a seamless experience for folks that go to The Graph website: you can start browsing the Explorer and get started with development and publishing to the network.
E
So all of that's going to be an integrated experience. That's kind of the process, and we're trying to balance getting this stuff out as soon as we can, but also in an orderly way. I think it was really important for us to have this initial bootstrapping period on the network, where we were able to go through the initial governance process and really see everything running smoothly on mainnet, and now do this transition.

E
So we're really excited to work with everybody on this. If you're a dapp that's running on the hosted service right now, feel free to reach out to any of us to find out how you can be part of this early migration program. I think it's something that'll be really exciting for folks to take part in.
A
Awesome, looking forward to it. And, you know, we've had so many curators with us along the journey since the testnet and the Curator Program, so we're excited to have you all along for the ride for the next few months. Next up, we wanted to provide a few Foundation updates, so I'll just share my screen quickly.

A
Our focus the last few months has just been standing up the Foundation itself and supporting the network bootstrapping. We've also done a big push on releasing five million dollars allocated to the community, across grants, Foundation contributors, and ecosystem contributors. On the governance side, Brandon and the Graph Council have been doing an incredible job of standing up some of our processes.
A
We had our first protocol town hall last month, and we've also had a few GIPs passed, so we're creating that cadence of using the forum and Snapshot to really create fruitful community discussion around some of these updates. We've also announced multi-blockchain support and have started on some of the EVM-based chains: Binance Smart Chain, Fantom, and Fuse are a few of the ones that we've already released, and we've got several in the pipeline.
A
One of the other exciting components of our community has been the educational rigor. We've had four universities reach out to us to help develop courses on either how to build a subgraph and a dapp together, how to become an indexer, or how to learn more about the future of work and contributing to open economies. So we're really excited to hone in there, and those are global, so you'll hear more from us, and maybe you can even participate in one of those courses in the future.
A
And then, just going over some of the grant highlights: of the 5 million, approximately 2.5 million is attributed to grants. As you can see, protocol was one of our main focuses. We've got a few heavy hitters on the indexer performance side, and several automations to improve the ability for new indexers to join our ecosystem.
A
On the tooling side, a lot of it is focused on monitoring tooling, getting more information to indexers and delegators about the network, and also improving tooling for subgraphs. One of the exciting ones here is creating a Python module. We've also got a subgraph testing framework and a query Chrome extension to make it easier for developers to share deep links of queries. On the dapps and subgraph side, we're across the board, excited to see representation in NFTs, scaling, and also DeFi.
A
One of the ones I'm most excited about is this decentralized Google Docs, because it's one of the first apps that's actually building on Saya and The Graph, so we as an ecosystem will learn a lot from that one. And on the community building side, to be frank, we were overwhelmed with the interest in continuing to grow our community.

A
As you can see, we've got quite a few programs, whether in educational content, podcasts, hackathon sponsorships, and lots of regional moderators that are just trying to build up the community and educate more folks about The Graph. And now we'll actually have a few of our grantees do some show and tell and talk about their grants.
G
All right. So thank you very much for giving me the opportunity today to present a little bit more about The Graph Academy. I'm going to share my screen right now with you.

G
I've prepared a small presentation; I hope you can see it. What we want to build with The Graph Academy is a knowledgeable community, and what we want to provide is a community outlet for community members to participate in writing documentation about The Graph ecosystem.
G
The problem we identified in The Graph ecosystem is that we had many different outlets producing content and documentation about the ecosystem, but there was no single go-to resource where we could send new members of the ecosystem so that they could get onboarded with The Graph easily.

G
And the second challenge we identified was that onboarding new participants in The Graph ecosystem was quite a challenge, as the ecosystem is quite complicated for new members, especially if they have no foundational cryptocurrency experience.
G
So what we want to provide is a forum, or a community, for these new participants in the network, so that they can quickly accelerate their knowledge about The Graph ecosystem. With The Graph Academy, we want to provide this single go-to resource where everybody can go, from developers to indexers to subgraph developers, delegators, and curators, to really grow their knowledge about the ecosystem. And there was an interesting discussion over at The Graph Discord that I noticed: there was a member who was a delegator, and he made the wrong choice and delegated to an indexer from whom he did not receive any delegation rewards. Instead of blaming himself and telling himself, well, I did not do the required due diligence about the indexer:
G
He was pretty disappointed about the entire Graph ecosystem and, as he stated in the quote, he said that he was kind of done with The Graph. So this conversation made me realize that whenever we have onboarding problems in the ecosystem, it will always have a negative consequence for the entire ecosystem and for the brand.

G
So this is why it's really important to help new community members get onboarded and to provide this resource for them, and this is why we have introduced The Graph Academy. We want to build a free, open-source, and community-driven knowledge base for the community and by the community.
G
And what we are doing is we have this two-tiered approach. On the one hand, we have The Graph Academy main page, where we provide visually appealing guides, which are interactive step-by-step tutorials, and then we also have the content over at GitHub, which provides technical documentation about the ecosystem.

G
I'm going to show you in a second how this all works, and obviously, for everyone who's interested, this is an open invitation. Everybody is welcome, and we are very happy if people join the movement and help us, or join us in the quest to become the go-to resource to master The Graph and to further reduce the barrier to building the infrastructure for Web3 applications.
G
This is the main page, and here you can find resources about the ecosystem for developers, for indexers, for delegators and curators, and a couple of video guides that we are working on right now. For example, we could have a look at The Graph Delegator Knowledge Hub, which features a lot of tutorials for delegators, to simply help delegators get started with The Graph. So we explain to them what The Graph Network is.
G
What a delegator is, the kinds of risks they are confronted with and should take into consideration, and we also help them to choose indexers. From there they can grow their knowledge about the ecosystem to more advanced subjects and things like that. I'm also going to show you the documentation page. This is, for example, the testnet Docker Compose guide by Payne.
G
So thank you very much for giving me the time to present today. I will post a link right now in the chat for everyone who is interested in contributing to The Graph Academy, and I would love to see you over there on our Discord server. So thank you very much.
H
Oh, there you go. Okay, great. Yeah, hi everyone, my name is Tommy. I'm the core dev from APY.vision, and the grant that we got from The Graph is going to be focused on building for AMMs.
H
It went from 2 billion to 50 billion just recently, but what you also need to know is that 30 to 40 percent of that TVL is actually locked up in decentralized exchanges and AMMs. The current problem is that it is very hard for someone to come in and read an AMM: they have to go on Uniswap, and then they have to go on Balancer. The idea is that what we want to do is aggregate all of these AMMs and provide one consistent subgraph for people to query. The reason why we are interested in that is because we also run a tool called APY.vision, and we actually help liquidity providers basically tease out their gains and losses based on their pools.
H
And so we wanted to create a subgraph to encompass not just the user data but also more historical information, so that other developers can help with the querying and also build on top of different AMMs. That's what we're going to be doing. As you can see here, the list is quite long, but we will be focusing on the bigger AMMs first as a first cut, so Balancer, SushiSwap, Curve, 1inch, and then we'll be moving down the list. And so we'll have two subgraphs.
H
One for generalized AMM data, which includes price history, volumes, reserves and day history for various things. But then we'll also have a subgraph that will index specific user data, so a history of their entries and exits.
H
What price they entered at, the number of LP tokens that they acquired, also the gas that was paid, and the fees that were collected as part of being an LP.
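As a sketch, the two subgraph shapes described here, generalized AMM data and per-user position data, might look something like the following TypeScript interfaces. All entity and field names are hypothetical illustrations, not APY Vision's actual schema.

```typescript
// Hypothetical entity shapes for the two subgraphs described in the talk.
// Field names and types are illustrative, not the actual schema.

// Generalized AMM data: price history, volumes, reserves, day history.
interface PoolDayData {
  poolId: string;
  protocol: string;   // e.g. "balancer", "sushiswap", "curve"
  date: number;       // unix day bucket
  volumeUSD: number;
  reservesUSD: number;
}

// Per-user data: entries/exits, entry price, LP tokens, gas, fees.
interface PositionEvent {
  user: string;
  poolId: string;
  kind: "enter" | "exit";
  timestamp: number;
  priceUSD: number;          // pool share price at the event
  lpTokens: number;          // LP tokens acquired or burned
  gasPaidUSD: number;
  feesCollectedUSD: number;  // fees attributable to the position so far
}

const sample: PositionEvent = {
  user: "0xabc",
  poolId: "0xdef",
  kind: "enter",
  timestamp: 1617667200,
  priceUSD: 1.05,
  lpTokens: 100,
  gasPaidUSD: 12.5,
  feesCollectedUSD: 0,
};
```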
So there's a lot of work here, but we've actually started building the subgraphs for our own use. One of them is at version zero, so it's still kind of ongoing, but the idea is that we start aggregating a lot of this data from different AMMs so that we can.
H
We can consume that data on the client side. We're open-sourcing it for everyone's use, and we hope that people contribute to it and that we have one unified subgraph, instead of calling 10 different places at the moment.
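The LP gains-and-losses accounting described above could be sketched roughly like this: current position value, plus withdrawals and fees earned, minus deposits and gas. The flat event shape and function name are assumptions for illustration, not APY Vision's actual code.

```typescript
// Minimal sketch of LP gains/losses accounting, assuming a flat list of
// enter/exit events plus the current USD value of the open position.

interface LpEvent {
  kind: "enter" | "exit";
  valueUSD: number;         // USD value of liquidity moved in or out
  gasPaidUSD: number;
  feesCollectedUSD: number;
}

function netResultUSD(events: LpEvent[], currentPositionUSD: number): number {
  let entered = 0;
  let exited = 0;
  let gas = 0;
  let fees = 0;
  for (const e of events) {
    if (e.kind === "enter") entered += e.valueUSD;
    else exited += e.valueUSD;
    gas += e.gasPaidUSD;
    fees += e.feesCollectedUSD;
  }
  // What the LP holds now, plus what they took out and earned in fees,
  // minus what they put in and what they spent on gas.
  return currentPositionUSD + exited + fees - entered - gas;
}
```

For example, entering with $1,000, exiting half at $500, earning $35 in fees and spending $30 on gas, with $520 still in the pool, nets out to a $25 gain.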
I
Cool. Hey everyone, thank you so much for the invitation to speak more about that today. Let me share my screen.
I
So if we look at the scope of work for the grant, the first major task has been actually simulating some of the optimizations that we can make to the way in which indexer agent interacts with the protocol contracts on chain. The learnings that come out of those simulations will then result in implementation work against indexer agent, and we'll look at what that might look like in a second. And then, potentially, out of the simulations we'll also get some insights.
I
That may mean that we propose some GIPs to actually tweak the on-chain contracts and potentially significantly lower gas costs for indexers. So where does this gas cost footprint actually come from? Indexers obviously index subgraphs, generate proofs of indexing, serve queries, and do all of the things they do off-chain, but naturally they need to make on-chain interactions in order to participate.
I
In the protocol, that footprint really comes from two places. The first is managing allocations: for any subgraph that an indexer is indexing, they need to manage allocations on chain, and those allocations have a max lifetime of 28 days. So closing and opening and really managing that set of allocations across the subgraphs being indexed is a significant source of gas cost for indexers. And then there's also actually claiming query
I
Fee rebates from the protocol, which also has a fair amount of gas cost attached to it. So today, indexer agent does this in a fairly simple and kind of dumb way, where it runs a reconciliation loop. Periodically, it basically asks: what are the rules that have been set, what is the set of subgraphs that I should be indexing? Then it compares that to what allocations have been made on chain and how old they are, because we've got this lifetime,
I
you know, max lifetime to manage. It will then effectively interact with the chain in order to reflect the desired set of allocations on chain. Right now this loop is quite simple, in that for any allocation that needs to be, say, closed and then reopened, because we want to continue indexing this subgraph,
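The comparison step of that reconciliation loop can be sketched as follows. This is a minimal illustration, assuming a 28-day max lifetime and simplified allocation records; the names are hypothetical, and the real indexer agent tracks considerably more state.

```typescript
// Sketch of the reconciliation comparison: desired subgraphs (from the
// indexer's rules) vs. allocations already on chain, with an age check
// against the max allocation lifetime.

interface Allocation {
  subgraph: string;   // subgraph deployment id
  createdAt: number;  // unix seconds when the allocation was opened
}

const MAX_LIFETIME_S = 28 * 24 * 60 * 60; // 28-day max lifetime

function needsAction(
  desired: Set<string>,
  onChain: Allocation[],
  now: number,
): { expired: Allocation[]; unwanted: Allocation[]; missing: string[] } {
  const allocated = new Set(onChain.map((a) => a.subgraph));
  return {
    // still wanted, but past the max lifetime: close and reopen
    expired: onChain.filter(
      (a) => desired.has(a.subgraph) && now - a.createdAt > MAX_LIFETIME_S,
    ),
    // no longer wanted: close
    unwanted: onChain.filter((a) => !desired.has(a.subgraph)),
    // wanted but not yet allocated: open
    missing: Array.from(desired).filter((s) => !allocated.has(s)),
  };
}
```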
I
It actually executes two separate transactions to do that: it will first close the allocation and then reopen the allocation. Similarly, for query fee rebates, it will send one transaction per allocation that it's claiming for. So the efficiencies really come down to batching interactions. In indexer agent today you have a one-to-one mapping between an interaction with the protocol, like closing an allocation, and transactions, and so if we can batch together many individual protocol interactions, like closing and opening an allocation, into a single transaction,
I
there is a fair amount of gas to be saved for each interaction that we include in the batch. Now, the on-chain contracts actually already have functions to facilitate this kind of optimized interaction; the two main functions that I'm looking at are closeAndAllocate and closeAllocationMany. And if we visualize what making that reconciliation loop smarter might look like: you've got the old set of allocations and the new set of allocations.
I
There's going to be some overlap between the subgraphs that we were indexing before and the subgraphs that we want to continue indexing, and for those subgraphs it makes a lot more sense for indexer agent to do a closeAndAllocate in a single transaction, rather than closing and opening each one with two separate transactions.
I
Similarly, there's going to be some churn in the subgraphs that an indexer decides to index: bad subgraphs are going to fall off, and great new subgraphs, perhaps as they're surfaced by the curation market, are going to surface, and new allocations will need to be opened for those. And this loop might look a little bit smarter, like this.
I
So rather than submitting a single transaction for each allocation that we want to close and not reopen, we can just close all of them in one batch with closeAllocationMany, and then for any subgraphs we want to continue indexing, we'll probably implement a replacement where we essentially send n closeAndAllocate transactions.
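The batched plan just described can be sketched as a simple set diff over subgraph ids. The plan type and helper name are hypothetical, and real calls to closeAllocationMany and closeAndAllocate also carry proofs of indexing, stake amounts, and other parameters not modeled here.

```typescript
// Sketch of the batched plan: a set diff between the old and new allocation
// sets, mapped onto one closeAllocationMany batch, one closeAndAllocate per
// continued subgraph, and one allocate per new subgraph.

interface TxPlan {
  closeMany: string[];        // dropped subgraphs: one closeAllocationMany tx
  closeAndAllocate: string[]; // continued subgraphs: one closeAndAllocate each
  allocate: string[];         // new subgraphs: one allocate each
}

function planTransactions(oldSet: Set<string>, newSet: Set<string>): TxPlan {
  const old = Array.from(oldSet);
  return {
    closeMany: old.filter((s) => !newSet.has(s)),
    closeAndAllocate: old.filter((s) => newSet.has(s)),
    allocate: Array.from(newSet).filter((s) => !oldSet.has(s)),
  };
}

// For churn {a, b, c} -> {b, c, d}: naively that is 6 transactions (three
// closes, three opens); batched, it is 1 closeMany + 2 closeAndAllocate
// + 1 allocate = 4 transactions.
```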
I
So this is an example of closeAndAllocate actually being used on mainnet. This particular one was closed and reallocated nine days ago, and I think that's maybe the fourth or fifth reallocation that has been done that way. It's consistently saving about 10 percent in gas usage, just by batching those two interactions into one transaction. Work on simulating
I
a kind of more comprehensive set of scenarios has begun, and for anyone interested, I'm using Brownie and Ganache to do that. Initially this is going to be used to just test different subgraph churn scenarios and the impacts of batching in different ways, but it's also cool because we can use it to test theoretical contract upgrades.
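As a rough back-of-the-envelope for why batching helps: every Ethereum transaction pays a fixed 21,000-gas base cost, so folding n interactions into one transaction saves at least (n - 1) * 21,000 gas of base cost, plus smaller per-call overheads not modeled here. This toy model is an illustration only; the roughly 10 percent figure quoted above is an observed number, not an output of it.

```typescript
// Toy model of the base-cost savings from batching n protocol interactions
// into one transaction instead of sending n separate transactions.

const TX_BASE_GAS = 21_000; // fixed intrinsic gas per Ethereum transaction

function batchedBaseSavings(interactions: number): number {
  if (interactions <= 1) return 0; // nothing to batch
  return (interactions - 1) * TX_BASE_GAS;
}

console.log(batchedBaseSavings(2)); // 21000
```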
I
I just want to say thank you so much to the foundation; being funded to work on really interesting problems is awesome. And I'd just say to any other contributors that are interested in getting involved: apply for a grant. It may feel daunting, but the team is really friendly and the process is a lot simpler and smoother than you might expect. Thanks again.
A
Awesome, thank you so much, Chris, and to all of our grantees. I couldn't have ended it off on a better note. I think we've run out of time today for Q&A, so if you have any outstanding questions, please head to the forum, and Edge & Node, the Foundation, and other teams can answer them, and we'll make sure to end off with some Q&A time next time. I think that's it for now. So thank you, everyone, for joining us for the second protocol town hall.