From YouTube: The Graph - Q1 2023 Quarterly Participant Update
Description
The Graph ecosystem continues to iterate to support the decentralized web.
The latest Graph Participant Update covers highlights from Q1 2023 for The Graph, including key stats, announcements and milestones. Watch the recording and learn about all the building and innovating happening within The Graph Ecosystem!
The Graph is the indexing and query layer of web3. Anyone can build and publish open APIs, called subgraphs, making data easily accessible.
Follow The Graph
Twitter: https://twitter.com/graphprotocol
Instagram: https://instagram.com/graphprotocol
LinkedIn: https://www.linkedin.com/company/thegraph/
GitHub: https://github.com/graphprotocol
Website: https://thegraph.com
A
All right, well, welcome to the Q1 2023 Graph participant update. We're super excited to share everything that has happened over the last quarter with you all, and also to chat a little bit about our goals for Q2, this quarter. Just a quick reminder: we are recording today's call. If you don't want to be recorded, make sure to turn off your video and change your name. There is a question link, so feel free to drop any questions throughout this presentation and we will get to them at the end.
A
So what a journey it's been. As I was preparing for today's presentation, I was looking at the data, and we are at over 550 billion queries across the hosted service and the decentralized network. We don't really talk about the hosted service anymore, just because all eyes are on the decentralized network, and a really big goal is to get a lot of those queries that are on the hosted service over to the network. But it's just exciting to take a moment and look at how far we've come.
A
As always, there are no forward-looking statements in this document or presentation. For those of you who don't know, I'm Tegan Kline, co-founder of Edge & Node, where I lead business, and we have closed out a very strong quarter. This quarter we're seeing query fees at the highest point we've ever seen in USD-equivalent terms, and what's really exciting is the opportunity that is still to come. There are almost a thousand subgraphs on the decentralized network.
A
Some
of
those
are
not
yet
fully
in
production
and
there's
over
three
thirty:
nine
thousand
lifetime
queries
on
the
hosted
or
Lifetime
subgraphs
on
the
hosted
service,
so
we're
excited
to
continue
helping
applications
get
over
to
the
decentralized
network
and
we'll
we'll
talk
a
little
bit
about
the
evolution
of
that
in
this
presentation.
A
There have been some major milestones this quarter. One is more of a vanity metric, but still exciting: The Graph made Fortune's Crypto Top 40 list. There are eight different categories in the crypto space, and The Graph came in third in the data category; Fortune is a tier-one media publication, so it's exciting to see this kind of recognition (and number one, we're coming for you). With that, we saw 2.5 million GRT in query fees.
A
This was actually a goal on the business side for the end of Q1, and right at the end of Q1 we hit this metric, so it was a pretty exciting and celebratory moment. So what does that look like?
A
This is the growth that you can see, in terms of the GRT evolution across Polygon, and now Arbitrum is in the mix. What's exciting with this is that the billing balance is almost double what the GRT query fees are, showing that more developers and applications are filling up their GRT balances to make sure they have enough to spend when their applications are used.
A
It shows the trust in the billing process that we've been working hard on. It's also important to point out that there was 41% quarter-over-quarter growth in USD terms at the end of Q1, showing more and more dapp usage on the decentralized network, helping developers and applications decentralize their API layer.
A
With that, there are some exciting stats. Participants are up and to the right across subgraphs, query fees, indexers and delegators; we're seeing a lot of growth across all of these metrics. On the subgraph side, 26% quarter-over-quarter growth, which actually beat last quarter's growth of 24% quarter over quarter, so it's exciting to see that increase at a more progressive rate. When it comes to query fees: 480k, up 41% quarter over quarter.
A
That is across all active network subgraphs. On the indexer side, the indexer community has really grown, with a 58% quarter-over-quarter increase; a lot of credit goes to the MIPs program here. As we go multi-chain, more and more indexers are joining the ecosystem to help serve queries. Web2 today has very few indexers: you have a few large players, and they get to decide what data is indexed. Within The Graph ecosystem, there are over 460 indexers serving queries and indexing data.
A
So if one indexer doesn't want to index the data, that's okay, because there will be others to index it. On the delegation side, delegators only slightly increased, but we're grateful to welcome new delegators to the ecosystem, and we've actually closed out with the largest delegations flowing in, with two billion GRT in active delegation, so really exciting. And our move to Arbitrum will lower the barrier to entry for delegating, indexing and curation.
A
So we expect to see more delegation then. With that, I just want to double-click into Messari's Q1 State of The Graph report. A lot of up-and-to-the-right metrics, but what's really exciting here is that the network distributed the highest number of indexer rewards to delegators ever on record. So it's exciting to see that more rewards are going to indexers and also to delegators, and that indexers are stewarding that delegation back to the delegators. On this slide: the hosted service versus the network.
A
It's super exciting to see things happening on par with, or better than, the hosted service. On uptime, it really is unbeatable, with 460 indexers serving queries at any time. I also want to point out the median latency here, which is really impressive, as well as the query success rate. And one piece on this is just the quality of service.
A
I was chatting with Brandon the other day, and we're seeing quality of service in the three nines. People told us that three nines of quality of service on a decentralized network would be impossible to achieve, and here we are beating that. I think that's something everyone here should really be proud of.
A
When I was chatting with Brandon, he was sharing that it's not uncommon for centralized providers to see four nines or five nines in the traditional world, and I said I want to beat those. Brandon's response was that we don't really have to beat those; it's okay to be on par, because you don't need to be 10x better than centralized providers when we're a thousand-x better in other areas like openness, composability, network effects, public goods and unstoppable applications.
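For reference, those "nines" map to concrete downtime budgets. Here is a quick back-of-the-envelope calculation (an illustrative aside, not a figure from the call) of how much downtime per year each availability level allows:

```python
# Allowed downtime per year for a given availability level ("the nines").
# A back-of-the-envelope sketch; the levels shown are illustrative.

HOURS_PER_YEAR = 365 * 24  # ignoring leap years for simplicity

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of downtime per year permitted at the given availability."""
    return (1.0 - availability) * HOURS_PER_YEAR * 60

for label, availability in [
    ("three nines (99.9%)", 0.999),
    ("four nines (99.99%)", 0.9999),
    ("five nines (99.999%)", 0.99999),
]:
    print(f"{label}: ~{downtime_minutes_per_year(availability):.1f} min/year")
```

Three nines works out to roughly 8.8 hours of downtime per year, while five nines allows only about 5 minutes, which is why closing that gap on a decentralized network is notable.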
A
All of these things really shine through when the core offering is on par with centralized providers. Brandon has a really great analogy around electric cars, so Brandon, I want to pass the torch to you to double-click into that analogy.
B
Yeah, thanks Tegan. So one of the analogies I've given the core developer ecosystem is to think of the choice of building on decentralized infrastructure like that of purchasing an electric vehicle. In both instances there are all these other societal benefits and secondary benefits that Tegan was alluding to. But if the total cost of ownership of an EV is not competitive with a gas car, or if the charging networks aren't as convenient
B
as the gas stations, then people aren't going to make the choice to switch to EVs. But those aren't bases of competition where EVs need to be 10x better. It's not that charging stations need to be 10x more convenient than gas stations, or that the total cost of ownership needs to be 10x better than that of gas vehicles. It's that they need to be table-stakes competitive, so that all these other benefits of, in this case, EVs
B
but in our case building on decentralized infrastructure, can shine. One of the predictions that we made at Graph Day a year ago is that the decentralized network would actually be better than any centralized alternative, and I'll point out that even though many traditional SaaS products do get four and five nines, which we're really close to on the decentralized network,
B
the best comparison that we have for this particular type of service today is the hosted service. This problem area is unique because there are so many upstream dependencies on the blockchains themselves, on RPC providers, and so on and so forth, that can impact those quality-of-service numbers. So already today, the best centralized service that we know of for solving this use case, which is The Graph's hosted service, is being outdone by the decentralized network across all the main quality-of-service metrics that we're tracking.
A
Amazing, thank you, Brandon. With that, I want to look at some of the network statistics here. It's super exciting to see the rate at which the speed is increasing: it took 60 days to go from 400 subgraphs to 500 subgraphs on the decentralized network, 77 days to go from 500 to 600, and 61 days to go from 600 to 700. And, spoiler alert:
A
the team is coordinating with DAOs across The Graph ecosystem to really speed up migrations, and it's starting to show in the rate at which this is increasing. We're sitting here at almost a thousand subgraphs on the decentralized network, though not all of them are in production yet.
A
Some of them are still testing and not yet fully using their front end in production, so there's still opportunity there, but this really is the tip of the iceberg, and we're excited to see this flywheel effect continue. Now, double-clicking into some of the subgraphs on the network: this really is who the spotlight should be on.
A
We provide this infrastructure so that these applications can flourish and shine, and there are over 150 subgraphs used in production for on-chain data on the decentralized network today. One thing that's really exciting here, and an opportunity for us, is third-party data consumers: not the applications themselves, but other companies or other projects querying the open subgraphs.
A
That is a big percentage of the traffic on the hosted service, and as we get closer to Fusion and to phase two, we can really start to push those third-party queriers to the decentralized network, which is exciting. A lot of them are actually hungry to start paying for queries on the network, and some of them maybe don't trust things that are quote-unquote free.
A
So that's a really exciting opportunity for us, with more to share on that soon. We've also heard from you that demystifying the token economics is something many of you have asked for, so we spent some time working on this image and clarifying some of the token economics in the documentation.
A
Thank you for that feedback. Here I just want to point out how the GRT utility token works, and it's important to state that it is a utility work token, so only purchase what you intend to use on the network.
A
But this is really what the incentive structure looks like: there are incentives for good behavior and disincentives for bad behavior across the indexer, curator and delegator ecosystem, and I'm always happy to dive into any of this with you; if you want to do a deep-dive session, just let me know. With that, the progress on multi-chain on the network: this has been a big initiative. There are now five chains on the decentralized network, and we've been helping applications on these chains migrate.
A
We've also been working with a lot of these teams to make a lot of noise, both at the chain level and at the application level, and there's progress to unblock Fantom, Optimism and Polygon on the network this quarter, fingers crossed. We are there to help white-glove these applications over to the network, but there is still a long way to go.
A
This is only 8 out of 40 chains, so we're continuing to try to unblock some of the chains on the network, and if you have opinions on which chains you would like to see next in Wave 2, please let us know. With that, I want to pass it over to Brandon Ramirez, interim CEO of Edge & Node and an original founder of The Graph, who will chat a little bit about some of the improvements, launches, integrations and tech in production.
B
Yeah, so I want to kick off by talking about a bunch of UI and developer-experience improvements that we've been making to The Graph Network. We talked about quality of service, and the huge milestones we've made there, earlier in the call, but within The Graph core dev ecosystem we've typically talked about
B
quality of service, cost of service and developer experience as being the three pillars. We need to be table stakes on all of those, not just quality of service, so you're seeing a lot of effort being put into the developer experience now that we've hit a lot of these key quality-of-service milestones. Tegan, if you can go to the next slide.
B
So a big one here is the Arbitrum migration, and there are a few layers to this. But I just want to emphasize how ambitious it is to upgrade a protocol like The Graph while it's live and running.
B
It may not be quite at the level of complexity of Ethereum's merge, but it really is like upgrading an airplane while you're flying it, and I'm super proud of the core developer teams and smart contract engineers working on this, who have been flawlessly handling the migration from layer one to layer two for The Graph. Right now we have a full running version of the protocol on Arbitrum, and we have turned it on for five percent of the network.
B
Indexing rewards are now being directed towards Arbitrum. There's a GIP, GIP-0052, in the forums, if you're interested, that outlines the schedule to gradually shift all the indexing rewards that are going to the L1 protocol to eventually be 100% directed towards the L2 protocol running on Arbitrum.
B
This is super exciting: already in this very first phase of five percent indexing rewards, we've seen quality-of-service parity on Arbitrum between subgraphs that are on L2 and subgraphs that are on L1, and we've seen massive cost improvements. Anecdotally, some of the key transactions that indexers have to do, like opening and closing allocations, we've seen be 130 times cheaper on Arbitrum than they were on Ethereum L1. And these are not only savings that go to the indexers'
B
bottom line; they can invest that in infrastructure and more meaningful capital expenditures that drive quality of service in the network, and they can pass on savings to consumers and subgraph developers. Importantly, it also makes the network far more dynamic, because now, when an indexer is thinking about supporting the latest subgraph that just got deployed, or switching stake from one subgraph to another, they don't have to think "every time I take one of those actions, I'm taking this huge hit"
B
in terms of gas costs. Finally, I will say that many of the future developer-experience improvements that you've heard us talk about, and that you're going to continue hearing us talk about, have cheaper gas costs as a huge enabler.
B
Things like doing on-chain subscriptions and recurring payments, which make the developer experience for billing feel much more like a SaaS experience, wouldn't be possible today on Ethereum L1 in that gas-cost environment, but Arbitrum has made those things possible. Next slide. So, the first step of many, actually.
B
This isn't the first, because earlier this year we streamlined the movement of tokens from L1 onto the L2 billing contract on Arbitrum, but the next step in this process of making the billing experience really table-stakes competitive with any SaaS product is making sure that we're addressing the needs of the projects that are still sort of stuck in the legacy world of using banks and credit cards as their primary form of managing expenses.
B
So there is a partnership that was announced earlier this year: they have a fiat on-ramp that's integrated directly into the Subgraph Studio, which lets people quickly use a credit card to basically get GRT directly on Arbitrum, which can then immediately be used for billing and for paying for queries. Next.
B
So another big announcement we had earlier this year was file data sources. These are now live on the network in experimental mode. This has been a long-requested feature for supporting a lot of use cases, in particular NFTs.
B
It required a pretty deep re-architecture of the way that we even do indexing in subgraphs, because previously, any time a file became available or unavailable, you would have to roll back the entire history to the point where that file was first indexed.
B
Now we have concurrent indexing jobs for the on-chain data and the off-chain data, so we can continue syncing the on-chain data and, as files become available or unavailable, such as the ones used for NFT metadata, the subgraph can just grab that metadata, add it to the state of the subgraph, and continue business as usual.
B
For now, this feature is labeled experimental because The Graph doesn't yet have an oracle for data availability that we can really rely on. The ecosystem has looked at projects like Celestia and EigenLayer as potentially being part of the solution space there, but in the meantime we expect subgraph developers to use indexers that have a high reputation and a good track record of responding to queries across other subgraphs.
B
Next, please. Another big quality-of-life improvement we've made recently is the activity feed. This is, I would say, not part of one of our big strategic initiatives, but it's been something that's been super highly requested.
B
It addresses something that really takes developers out of the flow of using The Graph: if you're coming upon a subgraph and want to understand what's happening to that subgraph, you'd need to go out to Etherscan or some other source of data to really understand the activity on it.
B
Whether it's indexers staking on those subgraphs, upgrades to the subgraphs, curation activity, etc. Next. So we have some exciting new Subgraph Studio integrations to announce; go to the next slide. One of the things that we wanted to make sure of, even as we do this giant push to the decentralized network:
B
we recognize that the blockchain ecosystem isn't standing still, and we didn't want to leave new chains that need data indexing and querying in the lurch. So we made the decision to integrate Polygon's Hermez zkEVM, zkSync and Coinbase's newly announced Base layer two directly into the Subgraph Studio. Currently this uses a non-rate-limited version of the Studio's sandbox infrastructure, so these are still hosted subgraphs.
B
But now that they've been integrated directly into the Studio, when these chains eventually migrate to the decentralized network, it'll basically be a one-click thing to get those subgraphs onto the decentralized network. So it's a much lower bar to cross than, for example, for the chains we're currently migrating from the hosted service.
B
And we have some new tech in production to talk about. This is a really exciting one that I think people have been following closely ever since we talked about it at Graph Day last year. At last Graph Day we announced the developer preview for substreams; this was a new project that stemmed from, I believe, the Cancun core developer retreat earlier last year, in January, and it's made an incredible amount of progress very, very quickly.
B
The latest update is that the substreams team released its hosted endpoint for basically production-ready usage of substreams. Now, these are still hosted, but what we're going to learn from projects using these endpoints in production is going to be super important for us to figure out the best way to integrate, and to design the economics for supporting, substreams natively in the decentralized network. Some of you may have seen the GIP (the number escapes me, but it's the World of Data Services one), which is a proposal in the core dev ecosystem.
B
That proposal hints at making The Graph multi-service: instead of just having subgraphs as a single service, think about supporting Firehose, substreams, subgraphs, key-value stores, maybe SQL. So there are really a lot of possibilities here, and this is an exciting first step. I'll say the first place you're going to see this adding a lot of value to the network near-term is what we're calling substreams-enabled subgraphs. This is using the existing subgraph framework with substreams as a data source into those subgraphs, and we're already seeing results in that pattern.
B
In that pattern, because of the database, you end up with some load; loading entities into the database tends to be the bottleneck. So you don't necessarily see the full 100x right away, but it gets you something like 10 to 30x, because the database takes time to build the indices
B
and to manage entity-version snapshots for time-travel queries and things like that. But this is already super exciting: we're seeing really notable subgraphs that used to take weeks to index now sync in a matter of hours. And to one of the questions that I already saw come in in the chat:
B
this has been one of the biggest requested features, the indexing time to get certain projects, especially the high-data projects, from the hosted service onto the decentralized network. So even before we see the full-fledged World of Data Services in the decentralized network, I think substreams-enabled subgraphs is going to be a big piece of the narrative that you'll be hearing a lot more about.
A
Thank you, appreciate it. With that: a lot is planned for Q2, and we want to share a little bit about the priorities and goals. A lot happened in Q1, but we're pushing to ship even more this quarter. Some of the biggest priorities, as mentioned, are getting Polygon, Fantom and Optimism on the decentralized network, as well as a plan for the next wave of chains to go live on the network, and also recurring payments and subscriptions.
A
This will be a really big one. Right now a lot of developers and individuals across the ecosystem are manually adding GRT to their balances for their dapp usage, and this will take that away so it can just flow seamlessly, similar to the experience you have with a credit card today. Then there's the Arbitrum layer-2 integration, which we touched upon.
A
This will really lower the barrier to entry for delegators, indexers and curators, and it'll also allow us to start doing some advanced tasks around delegation, with things like Coinbase Earn and CoinMarketCap Earn as examples. And then, as Brandon mentioned, substreams-enabled subgraphs: this is a big one for a lot of the high-production subgraphs.
A
Of course, migrations are a huge, huge priority that rolls over from last quarter, and I'll also add that we're working on Fusion, which is work inside the Subgraph Studio, and then unblocking phase one of the sunrise of decentralized data, which is when we will no longer allow new subgraphs to spin up on the hosted service for the chains that are ready for that. With that, I want to pass it over to Sam Green, co-founder and head of research at Semiotic, one of the core devs within The Graph ecosystem, to talk about some of the work they're doing on the AI and machine learning side.
C
Thanks, Tegan. Hi everyone. We've been working on AI-related projects for the past few years. We have two tools that are currently deployed in the protocol today, and a lot of our top indexers are using these tools. The first one that we built, which is released and which we continue to improve, is called Auto Agora. Auto Agora is a reinforcement-learning-based tool (RL, by the way, is a family of AI techniques).
C
Auto Agora is used by indexers to automatically set query prices. Basically, it models the computational costs that queries incur on the indexer's infrastructure and then automatically sets prices based on those resource costs. The reinforcement-learning aspect is that, on the fly, it adjusts prices according to market demand, so it can increase or decrease prices to maximize the indexer's revenue and be more competitive.
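To make the idea of on-the-fly, demand-responsive pricing concrete, here is a toy sketch. To be clear, this is not the actual Auto Agora algorithm; the demand curve, the update rule and all numbers below are invented purely for illustration:

```python
# Toy illustration of demand-responsive query pricing.
# NOT the real Auto Agora algorithm: the demand model and update
# rule are invented for this sketch.

def simulated_demand(price: float) -> float:
    """Hypothetical market: fewer queries arrive as the price rises."""
    return max(0.0, 100.0 - 400.0 * price)

def adjust_price(price: float, demand: float, target: float, step: float = 0.05) -> float:
    """Nudge the price toward the level where demand matches capacity."""
    if demand > target:            # more demand than we want to serve: raise price
        return price * (1 + step)
    return price * (1 - step)      # spare capacity: lower price to attract queries

price = 0.01          # starting price per query, arbitrary units
target_queries = 60.0  # how many queries this indexer wants to serve
for _ in range(200):
    price = adjust_price(price, simulated_demand(price), target_queries)

# In this toy market the loop settles near the equilibrium price where
# 100 - 400 * p = 60, i.e. p = 0.1.
print(f"price after 200 adjustments: {price:.4f}")
```

A real system would of course observe actual query traffic rather than a closed-form demand curve, and would weigh modeled resource costs as described in the talk, but the feedback-loop shape is the same.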
C
In the protocol, indexers have to choose which subgraphs to index, and choosing which subgraphs to index is a complex decision-making process: you actually have to do a lot of math and have a bunch of DevOps knowledge to know which subgraphs to index. It's a hard problem, and the second tool takes all of the different factors in and automates the process for indexers. What this does for the indexer, first,
C
is maximize their revenue, because indexers are rewarded for indexing subgraphs in proportion to the amount of curation signal on those subgraphs, and this helps them do that. Similarly, it helps the network, because it makes sure that all of the subgraphs that have signal have the appropriate amount of indexing on them.
C
And finally, the network testing that we've done: we have AI tools that are used for testing the protocol. When we build these AI tools, we have to simulate certain aspects of the protocol to make sure that our tools are working, and in the process we have, in the past, discovered things.
C
We basically stress-test the network as our agents are learning, and we've discovered weaknesses that we've then been able to fix, and we will continue to do that as we continue with our tool development. So, finally, maybe what is exciting to a lot of people
C
these days is large language models, like ChatGPT, and basically everybody in The Graph ecosystem is excited about the prospects that LLMs bring. Specifically, some low-hanging fruit that we see is providing a natural-language way to query The Graph's data. So imagine a future where, in any language that you speak, whether that's English, Spanish, Mandarin, you name it, you'll be able to ask a question about web3 data, and then, under the hood,
C
we will convert that natural-language question into the technical queries, specifically GraphQL today, and then we will return the results in an understandable form to the person who's looking for the data. These sorts of approaches are going to really expand the number of people that we can serve our data to. So, at Semiotic,
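The translation step being described might be sketched roughly as follows. Everything here is a stand-in invented for illustration, not Semiotic's actual pipeline: the `ask_llm` stub, the prompt wording, and the hypothetical subgraph schema with a `tokens` entity:

```python
# Illustrative natural-language -> GraphQL flow. All names here are
# hypothetical: ask_llm() is a stub and the Token schema is invented.

def ask_llm(prompt: str) -> str:
    """Stub for a language-model call; returns a canned answer for the demo."""
    return '{ tokens(first: 5, orderBy: volume, orderDirection: desc) { symbol volume } }'

def question_to_graphql(question: str, schema_hint: str) -> str:
    """Wrap the user's question and a schema summary into a translation prompt."""
    prompt = (
        "Translate the question into a GraphQL query for this subgraph schema.\n"
        f"Schema: {schema_hint}\n"
        f"Question: {question}\n"
        "GraphQL:"
    )
    return ask_llm(prompt)

query = question_to_graphql(
    "What are the five highest-volume tokens?",
    "type Token { symbol: String!, volume: BigDecimal! }",
)
print(query)  # generated GraphQL, ready to send to a subgraph endpoint
```

A production system would validate the generated query against the real subgraph schema before executing it, and would then summarize the GraphQL response back into the user's language, as described above.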
C
we are working on R&D to support this, and we're also collaborating with other members of The Graph ecosystem to support their tool development. So you're going to see a lot of similar LLM-based tooling in the ecosystem, in the short term and in the years to come.
C
Finally, if you'd like to learn more about what we're doing today and what we're planning to do, there is a link at the bottom of this page to a blog post that we recently wrote. I'm now passing it over to Eva, director of The Graph Foundation, to talk about the roadmap, multi-chain on the network and GIPs, before Tegan shares some thank-yous and asks and we jump into Q&A.
A
Thank you.
D
So, looking back and forward on our roadmap: a bit of this has been covered already by Brandon, Tegan and Sam, but I just wanted to make a few more highlights. We've got these five working groups that we work across; for example, substreams work comes out of the Data and APIs working group.
D
One thing I want to highlight right off the top is Graphcast and POI Radio. This is basically a gossip network for The Graph, making it much easier for indexers to communicate with each other. In the most basic sense this might just be messaging, but long-term we view it as a piece of the data-integrity story, making sure that indexers are sharing accurate data with their users.
D
Brandon mentioned some of the billing improvements, but I don't know if he quite highlighted how significant it is that all developers can now pay with credit card and fiat with The Graph. One of the biggest pieces of feedback we've received is that developers want to access The Graph without having to hold GRT directly, and now they can do that, and we'll have a lot more features coming through.
D
Another one I wanted to highlight here is off-chain data sources. A long time coming has been full support for NFTs, meaning actually being able to index the NFT data itself, not just the transaction data. We can do that now, with support for IPFS, and we'll be adding other data sources over time as well, to make sure we cover the entire use case.
D
One last one I'll cover here is standardized subgraphs. Messari has been working over the past year on how to make it easier for the entire world of dapps to access data. The Graph is obviously the infrastructure, but there are also the human elements, and with the standardized subgraphs that Messari has built, there is now a standard across different categories and verticals that makes it very easy for different applications and protocols to coordinate around.
D
What do I mean by that? An example would be a standard definition for TVL and other metrics that otherwise are very different across the board. If you look at DeFi Llama and a few others, they all have a different answer for what TVL is. So we start here with standardized subgraphs, and where it ends up for users is highly accurate data, with The Graph making sure it has the most accurate data at all times.
D
Tegan touched on the chains that we already support; I just wanted to go over what's next. We've got eight chains through the MIPs program, and those three other chains, Optimism, Polygon and Fantom, are coming very shortly, and we want to create a much more public process for new chain integration. The hosted service supports over 40 chains, a bunch of which still need to migrate over themselves, and we continuously get requests from the community to support new chains.
D
So our goal is to create a very public GIP process, such that any indexer can run a chain after it's been accepted and approved by the Council, eventually offering even more service on the network than the hosted service does today, because we are limited in the resources that we have to grow the hosted service, and our focus really is growing the network. The last point on here is collaborating with client teams, so maybe a call to action is in order here.
D
Let us know, because we've gotten a lot more involved with the client and chain teams to make sure that clients are meeting the needs that indexers and users have. For example, the Polygon Erigon client wasn't quite there yet, and we started collaborating with both teams to make sure they can serve Polygon Erigon users well. Our goal, basically, is that as more of our infrastructure like Firehose comes out, we can push that upstream to the client teams instead of having the chain teams get involved here.
D
Bear markets are definitely build markets. One thing I'll highlight here is GIP-52, which Brandon mentioned: that's the final indexing rewards moving over, and we've got a staggered plan to make sure that there's safety on the network and that users continue to be served on mainnet as the transition to L2 occurs. We've also got L2 migration helpers, so we're really making sure that the entire indexing ecosystem is robust during this migration.
D
I'll also highlight just the first one, GIP-42: the world of data services. This is sort of a new direction that The Graph is taking, where substreams and subgraphs become two data services among many more opportunities we have going forward. So I think this is a really interesting GIP: it was specifically scoped to substreams, but it's actually also the start of a new generation within The Graph of figuring out how we embed new data services into the network.
A
Thank you, Eva. Okay, amazing. With that, my favorite part of the presentation is just the thank-yous. A lot of you have been super supportive, so thank you to Kyle from Multicoin, Reciprocal Ventures, Framework, DCG and FinTech Collective for all of the help over that one very long weekend we all had together and for connecting us to tier-one banking connections, and to FuturePerfect Ventures: thank you for your guidance during Operation Choke Point 2.0. Thank you to John Choi for all the suggestions around narrative and strategy.
A
Thank you to Michael, Craig and Ali from Reciprocal Ventures for all the strategy and growth guidance, and a special shout-out to Ali for the BD introductions shared. Thank you, Lubin, for the time during ETH Denver, and a shout-out to all the Graph Advocates DAO members, as well as the Graph builders who are working hard on migrations.
A
Now, how you can help. You can sign up for the developer newsletter that recently launched. You can also delegate on The Graph to help secure the network; let us know if you need help, we're here to support any of you through that journey. I know some of you are even running indexers, so if you have interest in doing that, feel free to reach out to us. Also, as Eva mentioned, on Firehose integration and mass adoption: we really want to help chains start running Firehose.
A
So if there are any chains that you are connected to, please let us know. We're also hiring across The Graph ecosystem. On the Edge & Node side, we're hiring a VP of engineering, two business development roles, a Rust engineer, a product engineering manager, a senior product manager, and a UI/UX designer; there are actually a few roles on the design side. The Graph Foundation is hiring a subgraph dev, The Guild is hiring an open-source dev, and Semiotic is hiring a DevOps engineer.
A
So if you know anyone that could fill these roles, please connect us. And of course, for any opportunities, speaking, podcasts, press, we're here for it. If you want to host an event at the House of Web3 in San Francisco, just let us know. And we're always here to take any advice and feedback that you have; we've been super receptive to the feedback we've received so far.
E
First one is: are all the subgraphs on the decentralized network being used for data, and if not, why not?
E
If it's okay, I can take that. I'd say definitely not. As Tegan alluded to a bit near the beginning of the call, many teams actually need to test very heavily before using the network in production, and that can take weeks to months: most importantly, finding the capacity, but also getting comfortable before switching over and using something in production.
E
What's important is that a large portion of the subgraphs on the decentralized network are curated, are synced by multiple indexers and are actively being tested by dapp teams around the world. So it's hopefully only a matter of time before they migrate over to the network in the near to mid-term future. The more subgraphs that get published and curated on the network, the more exciting things become, even if it takes time to fully migrate over for any given reason.
E
So I'd say no, but it's a great leading indicator of what's to come with the network in general.
A
Yeah, so a big one is just the different chains across the network. I think a lot of people are really excited about Polygon, as an example, going live on the network, as well as other chains; there are many applications across many different chains, so that is a big one. But also substreams-powered subgraphs: a lot of the large, high-traffic applications are really waiting for that to light up so that they can migrate over, and on the fusion side.
E
Yeah, I think those two hit probably two of the top three, but I'd say the biggest blocker by far is simply teams finding capacity to migrate the subgraph and get the testing done, and most of the time it's them just thinking that it takes more time than it does. The process is getting much simpler and much faster, not only due to Arbitrum but everything that the core dev ecosystem is working on.
E
Other than those three, there are smaller blockers, such as those related to functionality and feature parity with the hosted service, like the UX and DX improvements that Brandon went over. But those are temporary blockers; they're in our sights, and it's really only a matter of time before all blockers in that realm are taken down and all volume is able to migrate and rely on the network for really just the best uptime, lowest latency and most resilient data layer in web3, which we're consistently seeing are by far the most important factors in using any data technology.
B
No, that was super well said. To echo that, one of the big things is that we also offer a free hosted alternative today, right, and that is going to go away. That's something we've made clear to the developer community. But we also, to take that point, want to make sure that things like developer churn and net promoter score on the decentralized network are rock solid, so we're really letting the data, and the first-hand experience and feedback we're getting from developers, be the guide on when the right timing is. Obviously, the eventual deprecation of the hosted service is going to be a huge catalyst for migration.
B
Yeah, it's a great question. Every time we've gotten this question we've said "not yet", because there are higher-priority things we want to improve about the network, and as soon as the Graph Council or the governance starts touching things like the indexing reward rate, you sort of invite a political process to start playing out, and frankly, that's not what anyone really wants the core dev ecosystem or researchers to be focusing on.
B
Certainly, if longer-term research indicates that it's something we should do in the future, it's not something we're closed off to. One change that happened recently that was a little bit subtle was that the way the indexing reward was computed was modified when we supported Arbitrum, because of the challenges in synchronizing indexing rewards across L1 and L2. So we switched to a much simpler linear rewards calculation that actually has a slight decay built into it.
B
It's a very, very slow drift, and the Council can modify the parameters to keep it at the three percent target, but that already sort of lays the groundwork, and some core developers have asked: maybe do we just allow that decay to play out? So that's a conversation that's happening. It's not by any means final or decided upon, but definitely, if you have feedback, I encourage you to hop into the forums and voice your opinion.
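As a purely illustrative sketch of the "linear rewards with a slight decay" idea (the symbols below are placeholders, not the protocol's actual parameters or formula), a per-block reward rate of this shape drifts down very slowly:

```latex
% Illustrative only: r_0 and \lambda are placeholder parameters.
r(t) = r_0\,(1-\lambda)^t, \qquad 0 < \lambda \ll 1
```

For very small $\lambda$ the rate is nearly constant over short horizons, and governance can periodically adjust $r_0$ to hold effective issuance near a target such as roughly 3% annually, or let the decay play out as discussed above.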
A
Yeah, and I'll also just add that the token economics will likely evolve. I think there's a lot that the community and we have learned since launching the initial phase of the token economics, and so, as Brandon mentioned, feel free to voice your opinions. You can always propose GIPs, and the community welcomes your feedback.
E
Yeah, that's what's exciting about this: everyone can participate, and everyone should participate. It's not just the core dev teams and The Graph Foundation; it's pretty much the world building this out, which is exciting. Brandon, this next one's probably for you as well: are substreams something everyone will use, and will subgraphs therefore no longer be a standard going forward?
B
Yes, that's a great question, and I alluded to this a little bit earlier in the talk when I talked about substreams-enabled subgraphs. A huge part of the multi-data-services story in The Graph is all about composition, and composition allows you to use the right tool for the right job and form more complex, heterogeneous data pipelines. As we've already talked about, I think a really big example of that is going to be substreams data being consumed directly into subgraphs to be queried via GraphQL, the way that application developers like to do that today.
B
There's a lot of benefits to that pattern. For one, subgraphs are a lot easier to reason about, in their execution model and their query model, than substreams. Even if you take Rust out of the equation (today you have to write substreams in Rust, while you can write subgraphs in AssemblyScript), the framework and execution model of substreams is just inherently more complex, and so it may not be the case that every developer wants to write substreams to take advantage of their performance benefits. But the beauty is that they don't need to, because there are already great developers, Messari being one of them, building these catalogs of high-quality substreams.
B
Downstream application developers will then be able to pull those substreams data sources into their subgraphs and use a more familiar, simpler development experience to shape the data into whatever read schema they need to support their UIs, without having to do all the heavy back-end data engineering that substreams entail. So it's kind of the best of both worlds. I do think that over time, subgraphs will start to be a little bit…
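To ground that composition pattern, here is a minimal sketch of the application developer's side: building a GraphQL request body against a subgraph's read schema. The endpoint and the `transfers` entity fields are hypothetical examples for illustration, not any real subgraph's schema.

```python
import json

# Hypothetical gateway endpoint; a real one requires a subgraph ID and API key.
ENDPOINT = "https://gateway.example/api/subgraphs/id/<SUBGRAPH_ID>"

def build_query(first: int = 5) -> dict:
    """Build the JSON body for a GraphQL request against a subgraph's read schema."""
    query = """
    query Transfers($first: Int!) {
      transfers(first: $first, orderBy: timestamp, orderDirection: desc) {
        id
        from
        to
        value
      }
    }
    """
    return {"query": query, "variables": {"first": first}}

# This body would be POSTed to ENDPOINT; upstream, a substream can do the
# heavy extraction while the subgraph shapes it into this queryable schema.
body = json.dumps(build_query(3))
print(json.loads(body)["variables"])  # {'first': 3}
```

The point of the pattern is that the consumer only ever sees this simple query surface, regardless of whether a substream or a plain subgraph mapping produced the entities.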
A
Yeah, and so they'll both be a standard, both subgraphs and substreams: maybe subgraphs for some of the lighter use cases and substreams for some of the larger use cases. This is unblocking some of the gaming applications that want to leverage substreams, or smart contract wallets. So a lot of new use cases will be enabled with substreams, but it's not one or the other.
E
I'm excited to see substreams built out, and I'm sure we'll have more use cases in the future. On that, the last question I'm seeing, maybe for you, Eva: is there a limit to how many chains we'll be able to integrate with the decentralized network through the GIP integration process you mentioned? Have you all thought through that, and what does that look like in the near, mid and long term?
D
Yeah, great question. There should never really be a limit, especially as the network scales; it's designed to scale to support that many users. But there is a constraint of indexers running the nodes of those chains, and that is something we're already starting to think about: how do we best integrate with those ecosystems? For example, meeting with the Polygon foundation and the Polygon ecosystem,
D
really sharing with and educating the current node operators and validators of those ecosystems about becoming an indexer, because they're essentially already doing half the work by running the node; they would just then have to run Graph Node. So really, that would be the limitation. As these GIPs roll out, it'll be interesting to see the feedback from the indexer community themselves on their desire to run those chains or support those ecosystems.
B
One thing I'll add to that real quickly is that if you're a chain ecosystem, foundation or client development team, the best thing you can do today to guarantee that your chain can be added to The Graph in a seamless fashion is to integrate Firehose and to natively support that integration. That's something that Tegan alluded to earlier in the talk, but really, Firehose is key.
A
Yeah, and I think it's important to note that the MIPs program is coming to an end, but there are still incentives for indexers to jump on and index the chains going live in the future. It's really just about activating that node community, that indexer community, to rally behind the chain so we can get them supported on the network.
E
Yeah, and on that (we have one more question, but on that): if you have connections to different chain validator ecosystems, let's work to introduce and cross-pollinate, so Graph indexers can be validators and increase revenue for chains, and validators can help build resilience and increase capacity for chains on the decentralized network.
E
So if you have any introductions there, please loop them in to the core dev teams across the ecosystem, and we can work with them to push that forward. And the last question that just came in: how are substreams going to be indexed, and is there any idea of how the rewards would work for that? Probably for Brandon and Eva to start.
B
Yeah, so I can go first. There was a big meeting of the minds on this in Como, Italy, among the core developers, thinking through how not just substreams, but Firehose, key-value stores and a variety of other data sources you might think of, could be integrated simply and effectively into the decentralized network. There's a handful of proposals being floated around the core dev ecosystem right now.
B
I'll say the simplest of those actually doesn't require making any changes to the on-chain protocol logic. If you think about how subgraphs work today, with the indexing and the query markets: the indexing market is already designed for paying for a continuous service, right? The process of syncing a subgraph is an ongoing service, similar to processing a substream on an ongoing basis. And the query market that The Graph has is already designed for payments for a request-response type of service.
B
So if you send a GraphQL query in The Graph, you pay for a single query, pay by the query, and the beauty is that the on-chain economics and the on-chain smart contract logic are agnostic to the type of service or the type of request-response.
B
On the chain side, the protocol economics support that. It's really just the off-chain services, the handshaking and the negotiation between the data consumer and the service provider that need to be specified, but that's really more about specifying interfaces than it is about fundamentally redesigning or rethinking the way the protocol works. So we see a pretty clear path to getting these data services supported in the network.
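To make the distinction between the two markets concrete, here's a toy sketch (purely illustrative numbers and function names, not protocol code) contrasting the two payment shapes just described: continuous accrual for ongoing indexing work versus per-request fees for queries.

```python
def indexing_accrual(rate_per_block: float, blocks: int) -> float:
    """Continuous service: rewards accrue with time, like syncing a subgraph
    or processing a substream on an ongoing basis."""
    return rate_per_block * blocks

def query_fees(price_per_query: float, n_queries: int) -> float:
    """Request-response service: each query is paid for individually."""
    return price_per_query * n_queries

# Two independent payment shapes; the settlement layer can be agnostic
# to what kind of work each one paid for.
total = indexing_accrual(0.5, 100) + query_fees(0.25, 100)
print(total)  # 75.0
```

The point made in the talk is that both shapes already exist in the protocol, so a new data service mainly needs its off-chain interfaces specified, not new on-chain economics.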