From YouTube: The Graph - Core Devs Meeting #11
Description
The Graph’s Core Devs Meeting #11
This video was recorded: Thursday, March 17 @ 8am PST, 2021.
The Graph's Media:
Twitter: https://twitter.com/graphprotocol?s=20
Instagram: https://instagram.com/graphprotocol
LinkedIn: https://www.linkedin.com/company/theg...
Website: https://thegraph.com
A
If you're new here, some context: we're having these calls with core contributors every month, and they're always recorded and uploaded to YouTube. So I do invite you to check our channel if you've missed the last ones or want a recap. I'll also paste in the chat and show notes a link to The Graph's ecosystem calendar, so you can subscribe to future events like these and others, such as community talks. Okay.
A
So with that out of the way, a quick tentative agenda for today. We'll start off talking about where we are with blockchain integrations. Then, related to that, we can talk about the Ethereum Firehose; I know Edge & Node has been doing some testing with StreamingFast, so we can talk about that along those lines. There are also new things that have been posted on the forum about bringing multi-chain support to the decentralized network, so that's a big one for sure; there's some interesting stuff over there.
A
I think Adam can speak about this, also with Ariel. And finally we'll end with something that might be of interest to indexers as well: Semiotic can walk us through their recent efforts on building an automated cost modeling framework that will be leveraged by Agora.
A
So let me just accept the people that are still joining the call. Okay, so yeah, let's get into the good stuff. It's been a while since we've talked about multi-chain, so I'll hand it off to Alex, Adam, Joseph. Maybe we can talk about Solana and Tendermint and how things are coming along? Could you guys speak about this for a little bit?
B
Well, yes, work on Solana is continuing. We've had the node running and producing Firehose data for one month, which is really good. At some point the nodes were less stable and we had a harder time iterating on the boot process; booting these nodes often takes half an hour. So we're trying to improve that, and at some point we decided, let's let it run and see if it's stable. And the thing has been stable for a month.
B
We now need to revise the data model to make sure it's really robust. One thing we've discovered, even in Eth land, is that we need total ordering within a block. You know how, in a block in Ethereum, you have these logs, and they have an ordering that is complete from the beginning of the block to the end, so you can order them and know what happened. The goal with the Firehose is to have an extremely complete total ordering of events.
B
In Solana there's a beginning of instructions and an end of instructions, and changes to the state. We want to have sort of one counter that allows us to order everything, which gives us a clear picture of what happened during block execution, so we'll need to revise the data model for that. Then we'll need to make sure some other people are ready to run the node, consume it, produce those Firehose blocks and start serving the Firehose. A lot of the work, even the integration within graph-node, has been done already; we just need to have the node running. So, okay, that's my update on Solana. There are questions, of course.
C
In terms of requirements: obviously the requirements to run a Solana node are pretty meaty, so what does running a Firehose on top of that look like? Is it something where you just need enormous amounts of infrastructure, or have you got a sense of how difficult that's going to be?
B
Right, so normally any node that is running master replication is writing. In Solana we're running a validator node, which is applying all transactions and all that. (I have notifications on, but I can't mute them, so we'll see.) So yeah, it is running, but oftentimes these nodes are already handling some sort of indexing: either they're indexing in memory so that they can serve RPC calls, or the Solana node has a thing to write to Bigtable.
B
So it's writing things to a database, and there's a new accounts DB framework that also writes to some other Postgres database or whatever, and these things take a hit on the node. But thankfully, I think with the Firehose we can disable everything: we don't send anything to a remote server like Bigtable.
B
Nor do we need to index anything in memory, because the purpose isn't serving RPC queries, so we can shut down all the things that are indexing locally and just spew out the data. So there isn't a lot more overhead than these other systems, when the goal is just to spew it out and not even have the network round trip to Bigtable. There's a small hit; I don't know the exact number, maybe ten to twenty percent, or five to fifteen, I'm not sure exactly.
B
We need to do some benchmarking and optimization for that, but it's not so bad. I think it could work, and it's done in parallel too; there's a lot of work that is done in each of these threads, so it scales with the number of CPUs. So I think it's not so bad. Does that answer your question?
C
Yeah, I guess that's interesting: so you're not running a full Solana node as you might otherwise. Because obviously when indexing Ethereum you make eth_calls during indexing, have you thought about whether that would be something you'd still do? I actually don't know whether Solana supports those archival calls in the same way that Ethereum supports archival calls.
B
A
lot
of
those
do
not
do
anything
about
archive,
in
the
same
sense
that
you
can't
query
past
stuff.
That
data
is
just
far
long
gone
when,
when
you
query
the
note,
it's
really
real
time
only
stuff
like
that.
C
I was just saying that's potentially quite a constraint to get around. Although I guess, if everything's passed through as events, the model's just different; I'm probably just thinking about it from an Ethereum perspective.
B
So in Solana you won't need that; there are no read calls. You don't call the node to do stuff in the past or in the present; you get the data out. You're not really querying the state the way you do in Ethereum: in Ethereum you call a contract that interprets the data for you, whereas in Solana you query the data.
B
You interpret it yourself, so once it's output in the Firehose you don't need to go back. It's one of these chains where, when all the data is in a Firehose dump and you have that in flat files, it's glorious, because you have everything you'll ever want. The data is all there and there's nothing more the Solana node would help you do.
C
Got it. And I guess, because the Firehose is designed more for streaming from top to bottom, in terms of interrogating that state, obviously it's in there, but, if you understand my question, how do you support something that looks a bit like querying when actually the main Firehose interface is a streaming interface?
B
You can match on the beginning of the file, some values, so we'll be able to do some querying of that sort, similar to what we have right now in Ethereum for addresses. For example, you'll be able to say, give me all the changes for program accounts that are owned by this program, something like that, and we'll be able to shrink it using the same powerful stuff that Stefan wrote for Ethereum.
B
For indexing, though, I really think the streaming architecture, the thing code-named Substreams, will be the best way to consume that, because we'll be able to reduce it. There's a lot of data coming through, and we can reduce it to some meaningful bits and bytes to consume downstream.
A
I think Alex just brought up a topic we can talk about later on, but yeah, Joseph, do you want to talk about Tendermint and how things are going?
D
Yeah, sure, so I guess earlier this week, on Monday or Tuesday, we merged all of our Tendermint PRs. That's great. It was pre-reviewed by Edge & Node and StreamingFast, and now it's merged, which is great news. Just a little bit of explanation about what that means in the Cosmos ecosystem, as they call it:
D
We had some feedback, and something interesting about the vision of how we could move forward is that all these different chains in the Cosmos ecosystem are connected through IBC, and they can have their tokens and their data transferred. So it's really an ecosystem of different applications, and the vision is that when The Graph can start indexing these individual chains, we could become the kind of infrastructure where applications that need data from two or more chains use The Graph to collect all that data.
D
To achieve this, we had a great idea with some customers: we're going to try to have some kind of standardized schema across these different networks in the Cosmos ecosystem. So it could be really easy for people that are using two different subgraphs for two different chains to make almost the same call to get the same information. Let's say someone wants delegated rewards on Cosmos Hub and they also want the rewards on Terra.
D
If we could provide the same standard schema across these chains, it could be easier for them to query across chains. So this is something that we are focusing on, and next we're going to be working on Cosmos Hub; we are pretty close to it, and after that it's going to be the Terra chain and the Secret Network.
A
So I guess at some point, once we're ready to support these on the network, we might have indexers using chain-specific Firehoses for those chains, right? So maybe we can move on; as mentioned before, there's also work going on for Ethereum.
A
I know, Adam, we've been testing some things with StreamingFast, so can we maybe share some updates there too? I know indexers are trying to run Firehoses for Ethereum, or at least I've heard that from a few.
C
So I actually had a question just on the standardized schemas thing first, if that's okay. Sure, absolutely.
C
Yeah, so I think this is something really interesting. We talked about this the other day around standardizing schemas, and I think it could maybe be even broader than just within the Tendermint ecosystem. Is there a reason why you can't have a standardized schema across subgraphs in general, regardless of whether they're Ethereum or Tendermint? Is there something about the Tendermint ones in particular that you think is valuable, or is it just because that's a community already?
D
That's a great point, yeah. Of course, we could also be considering how we could have a standard schema across all different networks, even between Cosmos and Ethereum. When we got the idea and got the feedback from some users, we said we could start testing it in the Cosmos ecosystem and see how it goes.
D
And in the Cosmos ecosystem, some chains have the same kind of, let's say, specific goals, so we could assure that a certain percentage of the schema could be the same. But since every blockchain is for an application that is specific to that blockchain, the schema would have something additional on Terra compared to something on Cosmos Hub. So there are going to be some fields that are additional on one chain versus the other. But we were thinking about all that.
B
I think it's because those chains all have the same consensus engine: the notion, for example, of a block is the same, and there's a spot where we have chain-specific transactions and data. In one chain that might be buy NFT and sell NFT, and in another chain it will be, you know, transfer and withdraw, something like that. But there are a bunch of data structures that are similar, that are the same because of their commonality with Tendermint, and so the reward system and all these things come at that base layer.
B
That's why we want to have a common schema that is Tendermint-esque, with a spot for extension where you'll have those richly typed types that are chain specific; so you'd have buy NFT here, and they'll be in that same field. It's the first time we have such a pattern in the protobuf definitions, where we will have a large reuse of the base and some chain-specific things down low, right in the transaction spot.
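(As a rough illustration of the "common base plus chain-specific extension" pattern described above; the type and field names below are hypothetical, not the actual protobuf definitions used in the Tendermint integration.)

```typescript
// Minimal sketch of a Tendermint-esque base schema with a chain-specific
// extension spot. Names are hypothetical, not the real protobuf messages.

interface TendermintBlock {
  height: number;
  hash: string;
  time: string;
  transactions: TendermintTransaction[];
}

interface TendermintTransaction {
  hash: string;
  // Shared by every Tendermint chain: fees, rewards, events, and so on.
  events: { type: string; attributes: Record<string, string> }[];
  // Chain-specific payload: a Terra transaction or a Cosmos Hub transaction
  // would put its own richly typed messages here.
  extension: TerraMessage[] | CosmosHubMessage[];
}

// Hypothetical chain-specific message types.
interface TerraMessage { kind: "swap" | "buyNft" | "sellNft"; data: unknown; }
interface CosmosHubMessage { kind: "transfer" | "withdraw"; data: unknown; }
```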
D
Yeah, and if I may add something as well: in the future, if we're looking at subgraph composability, where one subgraph could query two subgraphs, the idea was also, well, we were talking with Messari, and Messari are interested in building their own subgraph for Cosmos Hub, but they are also interested in collecting data from different chains. So let's say delegation rewards exist on Cosmos Hub and on Terra.
D
In the future, or even just in the short term, it's the same call to get delegation rewards from Terra and from Cosmos Hub. But let's say we integrate more chains, say Osmosis and Stargaze. Stargaze is an NFT marketplace and Osmosis is the DEX, so if they want to query the price of a swap from, let's say, OSMO to ATOM, that's not going to be available anywhere else.
C
Nice, yeah. I always think about the standards you see with the token standards in Ethereum, and I think we've said this for a while, but we need to have those kinds of things, and they could go through a GIP process. Dave, who used to work for Edge & Node, actually posted something kind of along those lines for lending. So that's definitely something we want to foster.
C
Well, I guess, yeah, so then to get back onto the Firehose, the Ethereum Firehose stuff. Like Pedro said, we've been working on this for a while, and we've been updating this group and our weekly meetings on it for a while. Let me just pull this thing up, and I might put some people on the spot.
C
That's okay. But I guess, here's a firehose! I love this picture; I feel like he should be getting pushed backwards. But just talking about the Firehose, I wanted to cover a couple of initiatives going on. The first one is the Ethereum Firehose stuff.
C
This has gone through a lot of testing, and the point I think we're at is that we know the Firehose is great for individual subgraph indexing, but there are some things around, firstly, finalizing our POI confidence: running lots of subgraphs and making sure that the proofs of indexing are consistent whether you're indexing with an RPC or with a Firehose. A lot of kudos to Matt from StreamingFast.
C
He's done a lot of the thinking around what happens if you start with a Firehose, then go to an RPC, and then go back again. There are a lot of edge cases there, particularly because the RPC works one way, keeping track of the chain head, and the Firehose has another way. But obviously the preferred outcome is that you can mix and match between them and there are no problems.
C
If there are those kinds of problems, then maybe we'll need to think about introducing Firehose indexing in a different way, which might be more version controlled. Obviously the preferred case is that we can just do a blanket switch-over, and we're doing some testing on that at the moment. One thing I wanted to highlight, and people may know this, but to me the Firehose isn't just about speed and faster indexing; I think there's a really cool space here.
C
Firstly, one thing is more data being accessible in the mappings. When you're indexing Ethereum, you're constrained by what you can get from the RPC, whereas the Firehose block can pass down a lot more contextual data a lot more performantly. That's everything from getting receipts to other things that are happening within a given transaction, stuff which we currently have to rely on trace filtering for, which isn't supported by all Ethereum endpoints.
C
I just wanted to highlight that: it's not just about speed, it's also more functionality. I think the one thing, Alex, which we've talked about, and maybe you can describe it, is this idea of triggering not just on individual events but on a sort of constellation of specific events.
B
You want me to talk about that? Yeah, I found it fascinating. We were doing a lot of the PancakeSwap stuff, so Uniswap basically, and I noticed some usage patterns that made the code crazily hard to read. The code goes like this: one event, a mint event for example, will have four logs.
B
So if someone calls mint to mint some stuff in Uniswap, there are going to be four logs: transfer, transfer, then a sync event and then a mint event. It's really unfortunate that the mint event is last, because you'll receive the first transfer first, and since you don't have the context of the other things, you'll need to decide what to do with that transfer. So what the code does now is store it in a temporary entity called Transaction.
B
It waits for the next one, unpacks what was in there already, and then tries to say, with these heuristics, oh, maybe this one means a burn or a mint, I'm not sure, so I'm going to mark it as needing to be completed. Then on the sync I'm just going to change the price, and then I'll get the mint, and now I can sort of back-interpret what came before.
B
You should look at the code there; it's terrible. But there's a pattern here: you can imagine identifying patterns like transfer, transfer, sync, mint. That's a pattern; bring me the four together. And the thing is, there are important things in that transfer and that mint for the subgraph to work properly.
B
We need those two things, but because you get them in a different order, without the context, it makes things really hard and the code is so horrible. But if we could pattern match... and to make things worse, some of these are optional: there are actually potentially five transfers, two of which are optional, and you're not sure which ones they are. So sometimes you'll have just transfer, sync and mint, but that's another pattern you can match, right? You could say three, four or five logs, and then you have them together.
B
You can take better decisions knowing that it's a mint series or a burn series; the burn is similar, just transfer, transfer, sync, burn, just to make it more complex. So yeah, having more data available and additional triggers: we can see that the Firehose can bring you all that information together, even though the node will not give you context with these things when you get logs. Because we have logs scoped within one single call, we can know that these patterns match within one single EVM call, and we can pattern match on that.
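(A minimal sketch of the log-pattern idea described above, assuming a hypothetical trigger that hands the mapping all logs scoped to one EVM call at once; the type and function names are illustrative, not an existing graph-node or Firehose API.)

```typescript
// Sketch of matching the Transfer, Transfer, Sync, Mint "constellation"
// within one call, instead of receiving each log in isolation. Types and
// names are illustrative only.

interface Log { topic: string; data: string; }
interface CallScopedLogs { logs: Log[]; }

function handleMintSeries(call: CallScopedLogs): void {
  const transfers = call.logs.filter((l) => l.topic === "Transfer");
  const sync = call.logs.find((l) => l.topic === "Sync");
  const mint = call.logs.find((l) => l.topic === "Mint");

  // Three to five transfers are possible (two are optional), but the series
  // only counts as a mint when Sync and Mint are both present.
  if (mint === undefined || sync === undefined || transfers.length === 0) {
    return; // not a mint series, nothing to do here
  }
  // With the whole series in hand there is no need for a temporary
  // "Transaction" entity or back-interpretation heuristics: the transfers,
  // the price update from Sync and the Mint can be processed in one pass.
  console.log(`mint series matched with ${transfers.length} transfer(s)`);
}
```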
C
Yeah, so I think there's a lot of cool stuff there. And when you say the code is horrible: obviously, subgraphs can be optimized.
C
But this is also a constraint of the current model where you just get event by event by event, and so you've got to create these tiny temporary states which you then throw away. I think David, I don't know if he's on the call, has also been seeing some similar wrangling in other subgraphs. So I'd say this is definitely an area where a different way of getting data, rather than just getting one log at a time, would help.
C
I'm pretty excited by the next thing, then, just on this, which is this sort of filtering feature. I don't know if we've talked about this, or if Stefan is on the call, but essentially, when we were doing a lot of the testing with the Firehose, we saw that the Firehose had, I think, cut its teeth on really dense subgraphs, where there are lots and lots of events in almost every block.
C
But then, when you lined it up against a subgraph which is a bit more sparse, so maybe going many blocks without events, basically smaller apps, the Firehose wouldn't be as performant. So this has been a piece of work, which I think is generalizable, that the StreamingFast guys have done, which is kind of filtering down, essentially allowing the graph node to ask for a specific set of contracts or events.
E
Sorry, you want me to explain which part? I have some noise here.
E
Yeah, basically, we tried to model a type of index that would allow matching what is defined in the subgraphs.
E
So basically, you know what address, what type of method signatures and all of this. We started by allowing, in the request to the Firehose, an array of protobuf Any types, so that it's flexible and we can add any type of request, and we label these under a thing called transforms. A transform can define how I want my block transformed; for instance, it can filter it.
E
If I say I just want any call from this address or to this address, for a method called transfer or something like this, it can shrink the payload of the block by skipping all the transaction traces that are not relevant to my filter. And if the Firehose can see that in the next 200 blocks or a thousand blocks there's nothing that matches, it will simply jump to that next block, because we send a cursor with each response.
E
You may eventually get some blocks that are irrelevant, that have no content, but it still allows you to progress on your subgraph.
E
This has been implemented on the Firehose for the Ethereum chain and also for NEAR, so you can ask for NEAR block content by account, and you can also add this filter. So you might receive just block 10, then 15, then 2,000. It really reduces the payload, but mostly the bandwidth, of streaming empty blocks. When you get close to the live range, the head, you will receive every block, but if you have the filter they will be empty, because there's nothing matching.
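(A rough illustration of what such a transform-carrying request could look like from the client side; the message shapes and field names here are assumptions for the sketch, not the exact Firehose protobuf definitions.)

```typescript
// Sketch of a Firehose request carrying a filter-style transform, as
// described above. Field names are illustrative assumptions.

interface CallFilter {
  addresses: string[];   // only calls from/to these contract addresses
  signatures: string[];  // only these method signatures, e.g. transfer(...)
}

interface FirehoseRequest {
  startBlock: number;
  cursor?: string;       // resume point returned with every response
  // An open-ended list (protobuf Any in the real API), so new transform
  // kinds can be added without changing the request type.
  transforms: CallFilter[];
}

const request: FirehoseRequest = {
  startBlock: 12_000_000,
  transforms: [
    { addresses: ["0xSomeContract"], signatures: ["transfer(address,uint256)"] },
  ],
};
// The server shrinks each block to the matching transaction traces and can
// jump straight over long ranges with no matches; the cursor lets the
// client resume exactly where it left off.
```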
C
Cool, yeah. And I think the fact that NEAR support is in there is particularly salient right now. I think Philippe has been doing some work on the graph-node side to integrate the filters, and it's not set up just yet, and we've actually been having some degraded performance there this week, so I can't wait for that, because it'll make things go a lot faster.
B
I'd just like to add that the feature he's talking about there allowed us to really improve performance. Normally it needed to go through the chain and open all the files, which could take several minutes, and now, for some large swaths of the chain, because of that indexing stuff, which is all files based by the way, we can do that in five or six seconds. It's much, much faster, because you're not even opening files: the index tells you, don't open the next million blocks, because you know beforehand nothing is in there. So the performance is crazy.
B
I didn't even think we could do something like that, being files based, but with some indexing stuff, and we're using roaring bitmap technology in there, some of the things they put low-level in database systems, we're able to do crazy avoidance of loading data.
E
Because all the historical data doesn't change, we don't need any kind of stateful index, so we don't need a running Redis or a running SQL instance to update the indexes and remove stuff. We can have fixed-length files, so you just look up a file called, say, 0 to 10 million, and in that file it can look up a hash of an address or a type of query, and the roaring bitmap will tell it really quickly.
E
Well, there's some data around block five million, and then that's it. So we can skip all of this with a completely stateless system, just based on a high-throughput, highly parallelizable distributed file system. We're cutting the cost of operating in-RAM indexes by having those really simple block-boundary-based indices on files.
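(A very rough sketch of the stateless, file-based lookup being described; the file naming, key hashing and on-disk layout are assumptions for illustration, not the actual Firehose index format, which uses roaring bitmaps.)

```typescript
// Sketch of a stateless block-range index lookup over fixed-length files.
// File naming and layout are illustrative; the real format differs.

import { readFileSync, existsSync } from "fs";

// One index file covers a fixed block range, e.g. "index-0-10000000".
// For each key (e.g. a hash of a contract address) it records which blocks
// in the range contain matching data.
function blocksWithData(rangeStart: number, rangeEnd: number, key: string): number[] {
  const path = `index-${rangeStart}-${rangeEnd}`;
  if (!existsSync(path)) return [];

  // Toy on-disk format: JSON map from key to block offsets. (The real
  // implementation stores roaring bitmaps instead of plain arrays.)
  const index: Record<string, number[]> = JSON.parse(readFileSync(path, "utf8"));
  return (index[key] ?? []).map((offset) => rangeStart + offset);
}

// If the result is empty the reader skips the whole range without opening a
// single block file; because historical data never changes, no Redis or SQL
// instance is needed to keep these index files up to date.
const hits = blocksWithData(0, 10_000_000, "hash(0xSomeContract)");
console.log(hits.length === 0 ? "skip range" : `blocks: ${hits.join(", ")}`);
```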
A
Okay, I feel like we could always spend hours talking about Firehose stuff, right? It opens the door to a bunch of different things. But in the interest of time, let's try to follow along with the original agenda. Maybe we can segue into the recent efforts towards bringing multi-chain support to the network. I don't think this has been shared within the different working groups already, but we do have Adam, and I believe Ariel, also on the call.
A
So could we talk a little bit about the recent work on the epoch block oracle proposal? I believe it also leverages Zac's data edge GIP; that one was posted on the forum, and I can probably link it here in the chat. But yeah, Adam, do you want to speak about this and how we can get multi-chain indexing rewards, essentially how this will take us there?
C
For sure, yeah. I feel like I've been talking too much so far, but hopefully it's going okay.
C
Let's keep going. I've got some slides and all; not sure about the slideshow, so pretty high-level stuff. Yeah, this has been a really fun thing to work on, actually, with Zac and Ariel.
C
I think it particularly kicked into another gear when Ariel and I were together in Denver a couple of weeks back. So, broadly, the problem is that subgraphs are multi-network: we've got usage on so many EVM chains, and now some non-EVM chains that are building subgraphs, and it's been a huge part of our growth over the last year. But The Graph Network is not yet multi-network, and we obviously know that's part of the longer-term vision.
C
So the broad problem, and I have to move this around again, is around allocation closure. Essentially, with Ethereum mainnet subgraphs there's a really simple rule: you have to close an allocation that you've indexed every 28 epochs, and you do that with essentially the first block of a given epoch.
C
That then ensures that you are actually indexing with a certain amount of recency, and allows your proofs to be cross-checked across the network. The challenge when you have other chains is that different chains have different block speeds: blocks might be consistent but much faster than Ethereum.
C
They might also be inconsistent or variable, so actually lining up what block an allocation should be closed with across chains is not immediately obvious. There are a bunch of options you could leverage to try and solve this problem: an oracle, arbitrated cross-checking, a sync-chain idea.
C
We've talked about the gossip network a lot; state bridging, since there are bridges across some of these chains; and then also having more of the Graph protocol deployment on the chains being indexed, not just the protocol chain, which is Ethereum mainnet. There are a bunch of considerations here: what's the amount of upfront work, how much does it cost to add a new chain, what are the decentralization implications of what we're doing?
C
Also operational costs, indexer costs as well, because it's not just about whatever is being run to enable this; legibility, meaning how easy it is to understand what's going on; and then data freshness, which is a slightly separate one.
C
That's the problem of subgraphs essentially lagging the chain head. We want to incentivize indexers to close allocations with the most recent data indexed, and one way to do that would be to close allocations with even more recent blocks. So these are a bunch of things that go into the decision process. In terms of the proposal: we're proposing an oracle, the Oracle at Delphi, or at least that's what Google Images said this was, so essentially an oracle approach.
C
This, in a way, is a slightly pragmatic decision in terms of time to market and simplicity, but also in the knowledge that a lot of the research being done around L2 might actually change the dynamics of allocation on the network, so doing anything cross-chain that was too tied to specific bridges might be longer-term throwaway work.
C
Obviously this is a really super high priority thing for the network, so this is what we're going with in terms of the design. This is actually the currently designed smart contract, and as you can see, this is it. I couldn't even tell you how many lines that is without comments; it's, I think, four or five. So I guess the question is what's going on here, and this is alluding to the GIP which Zac posted.
C
I think it's number 25 on the forum, which is this idea of data edges. I won't explain it as eloquently as him, but essentially it's this.
C
Storing data and doing processing and calculations on Ethereum is very expensive, but calldata on Ethereum is relatively much cheaper, and it gives you the same sort of security guarantees. So the idea here is that if you just pass calldata to this fallback function, which can be called with any selector, then you get that data onto Ethereum, and all you need is a way to then process that data.
C
And that's what subgraphs do: subgraphs can detect those calls, they can process data that's been encoded in a really efficient way, and they can then create state, do processing and update stuff. So essentially you get something that's almost like an L2, and we're pretty excited about this as a sort of new model for doing stuff. This is what got us particularly excited when we were hanging out in Denver.
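(A loose illustration of the data edge pattern just described: a contract with only a fallback function receives arbitrary calldata, and a subgraph mapping decodes it and materializes state. The handler, the toy payload layout and the entity store below are hypothetical; the real encoding is far more compact.)

```typescript
// Sketch of a subgraph-side decoder for calldata sent to a data edge
// contract. Names and payload layout are hypothetical.

interface CallInput { to: string; inputData: Uint8Array; blockNumber: number; }

// Stand-in for the subgraph entity store.
const store = new Map<string, { network: string; epoch: number; startBlock: number }>();

// Called for every transaction sent to the data edge contract. All execution
// logic lives here in the mapping, not on-chain, so it can evolve without
// redeploying the contract.
function handleDataEdgeCall(call: CallInput): void {
  // Toy payload: repeated records of [networkId (1 byte), epoch (4 bytes),
  // startBlock (8 bytes)].
  const view = new DataView(call.inputData.buffer, call.inputData.byteOffset, call.inputData.byteLength);
  for (let offset = 0; offset + 13 <= call.inputData.byteLength; offset += 13) {
    const networkId = view.getUint8(offset);
    const epoch = view.getUint32(offset + 1);
    const startBlock = Number(view.getBigUint64(offset + 5));
    store.set(`${networkId}-${epoch}`, { network: `chain-${networkId}`, epoch, startBlock });
  }
}
```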
C
We were talking about doing this together, so it's called a data edge; yeah, I guess I said that already. In terms of what this gives us overall, from an architectural perspective: essentially, this smiley dude on the left is the oracle. The oracle will get the epoch start blocks from all the other blockchains, then encode that data and pass it as calldata to the data edge, which is a super simple contract on mainnet.
C
You then have a subgraph listening for any calls to that contract, and that's actually where all of the execution logic lives; any processing, any changes would just live in subgraph definitions rather than in Ethereum. Then, as an indexer, you'd be indexing a subgraph on another blockchain, you'd be able to get the epoch block which was detected by the oracle for that chain, and then you'd be able to close the allocation on mainnet with that block.
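(In indexer terms, the flow just described might look roughly like the sketch below; the GraphQL query, the oracle subgraph URL and the closeAllocation/computePoi calls are placeholders, not the actual epoch block oracle schema or indexer-agent API.)

```typescript
// Rough sketch of the indexer-side flow: read the epoch block for a chain
// from the oracle subgraph on mainnet, then close the allocation with it.
// Endpoints, query shape and helper functions are placeholders.

async function closeAllocationForChain(allocationId: string, network: string): Promise<void> {
  // 1. Ask the epoch block oracle subgraph for the current epoch's start
  //    block on the chain being indexed.
  const response = await fetch("https://example.com/epoch-block-oracle", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      query: `{ epochBlock(network: "${network}") { epoch blockNumber blockHash } }`,
    }),
  });
  const { data } = (await response.json()) as {
    data: { epochBlock: { epoch: number; blockNumber: number; blockHash: string } };
  };

  // 2. Compute the POI for the indexed subgraph at that block and close the
  //    allocation on mainnet with it.
  const poi = await computePoi(allocationId, data.epochBlock.blockHash);
  await closeAllocation(allocationId, poi);
}

// Placeholder declarations standing in for real indexer tooling.
declare function computePoi(allocationId: string, blockHash: string): Promise<string>;
declare function closeAllocation(allocationId: string, poi: string): Promise<void>;
```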
C
So, super simple. The key question here, obviously, is around operations: there are a bunch of things which can go wrong in this process.
C
Whether it's forks on these other blockchains and how you handle them, how you handle situations around upgrades or disputes, all those kinds of things. So in the GIP that's in progress we've really tried to call out all of the implications in all of the different situations, because I think this is an area where we do want to beef up and improve our robustness as, I guess, a core development community supporting these core protocol pieces.
C
So in terms of status: the data edge GIP is on the forum; the epoch block oracle GIP, which is basically a much more detailed rundown of what we've just gone through, is in draft; and there's a proof-of-concept repo we're working on.
C
There's a deployment on Ropsten, and kudos to everyone who's kind of swarmed on this, coming in as expert subgraph authors to just start decoding this data. Zac's been doing some incredibly efficient encoding and decoding, where you're essentially submitting quite big data for about 25k gas, which is tiny. So we're at the point now where it'd be great to start talking to other people about it and get general feedback. I'll pause there. Ariel?
C
I don't know if I missed anything that you'd call out.
A
I'm not sure if we have Ariel on the call, actually; now that I look, I don't think we do. And unfortunately Zac couldn't attend today either; I'm pretty sure he would be very excited to talk about this. But yeah, we have some minutes before moving on to the next item, so I don't know if Brandon wants to talk about this as well.
F
Yeah, I guess one thing I would just add is that this proposal, I think, really shines in its simplicity. It doesn't require us to make a ton of changes to the existing allocation management or the POI data structure. It does add some of the operational considerations Adam mentioned.
F
You know, POIs, given what's in the various charters for what a valid POI looks like, are something that can be handled out of band in disputes. So initially, similar to when we've rolled out other features when they were new, I think there will probably need to be some forgiveness early on, maybe where the arbitrator exercises some discretion.
F
But this is just going to be a new type of thing that indexers need to be concerned about, and a new type of thing that the arbitrator needs to be aware of. So those, I'd say, are the drawbacks of this approach: there's more room for human error and human subjectivity at the edges, and it's certainly not where we want to end up.
F
I think, as Adam alluded to, moving to L2 and understanding where the protocol logic will live for some period of time will allow us to make a longer-term investment in what the multi-blockchain bridging looks like. There are proposals in the forum as well that we've discussed that are a little bit more objective, rely on the arbitrator a little bit less, and allow for more recent data freshness guarantees.
F
Because of the nature of this type of bridging with the block oracle, it can't be too granular, because of things like reorgs and so forth that Adam mentioned. So some of these other proposals that we might evaluate in the future would allow us to, for example, say that a POI on, say, Solana needs to be submitted within five blocks of the chain head.
C
Yeah, sadly, I think it's like a lot of these cases where there's no perfect, fast answer. But I'm optimistic that we can do something meaningful, useful and interesting here, and also learn and prepare for the longer-term, more robust versions. Yeah, one hundred percent.
F
You know, we have the arbitrator role, we have various fishermen, and now we'll also have this block oracle. So I think it's good that the core devs have been focusing on this area, but it's definitely an area where, across our various teams, we'll want to make sure we're all making the right amount of investment and keeping these new services that are spinning up alive. And there are other GIPs in the forums that potentially introduce even more services.
F
So this is something that we'll be living with for a while, though I think some of these we'll be able to eliminate with future proposals.
A
Okay, thanks Brandon. Yeah, you mentioned something that we'll be covering during the next core devs call: the different working groups we've been formalizing at a core R&D level, let's say. That's something we'll cover two weeks from now, so for the people that are listening, please make sure you tune in. I think it's going to be great.
A
Let's move on to the last thing we had on the agenda. We have Alexis from Semiotic sharing something that I believe has not been shared much with other working groups yet, so it's pretty exciting. It's, I believe, the result of months and months of trialing and experimenting with how we could build some sort of automated framework for cost modeling, or better cost modeling and a better user experience we could attach to Agora.
A
So it should be interesting to see what we're building and how we're thinking of productizing it. So yeah, Alexis, do you want to take the stage and walk us through what you guys have built?
G
Okay, so anyway, the work I'm going to talk about is cost model automation. We know that most indexers don't want to write Agora models by hand, and I can understand that because I'm not doing that either. In any case, you may have heard about our research on machine learning used to predict the cost of GraphQL queries, basically a magic neural net that takes a GraphQL query and spits out a number. And we've built tools and analyzed the behavior of queries from the hosted service and so on.
G
From that, we thought about a shorter-term solution that we call automated Agora, or automated query pricing, something like that. In any case, here you can see the gist of it. This is the spirit of what we're trying to build.
G
In blue here you have the existing blocks of the indexer stack, and we are adding new logical blocks that will log and analyze the queries that the indexer receives, compute some statistics on them, and compute a cost for the most common queries, or whichever other statistic an indexer would want to use. Among those costs could be the actual wall-clock time of executing a query.
G
It could be rerunning the query through an instrumented Postgres instance that gives more details about the Postgres costs, or running it through an instrumented graph node, which will give more detail such as the peak memory used by that query in particular, or how many CPU cycles were used for that query.
G
Here's a rough idea of our current progress, more details about what we're doing right now. There might be better solutions to capture the queries that are received on the indexer side, but for now we've decided to pretty much capture and filter the standard output of the graph node and push the GraphQL queries to a RabbitMQ queue, such that we can implement pretty much a relief valve: if our system is slowed down, we don't slow everything else down.
G
We normalize the query, so we built a tool to normalize the query in many ways: you reorder the arguments alphabetically and so on, and remove arguments that shouldn't exist by also analyzing the schema of the subgraph. And then we store it.
G
We've come up with a schema to store those queries in a database, because we wanted to separate the query logs from the other logs coming from the graph node, so as not to pollute any other system that you might use to monitor your indexing operations and look for errors and such. So that database stores the logs in two tables, and one table stores actually the entry... actually, that's the next slide, sorry. It stores the data in two tables.
G
One is the actual logs: you get the timestamp, the query execution time from the query node, and the query variables that were extracted from the query, and we store just the query hash. That's the method for compressing those logs, because we've observed that those logs can accumulate really fast and become huge, so any compression here is good, I suppose. Then we have another table that stores only the queries.
G
So that's the correspondence between the query hashes and the queries, and those queries are modified in such a way that all the variables are removed. You get those variables in the actual logs table, but in the queries table you're just basically storing the skeleton of the query.
G
So that's really for the compression. It also makes statistics easier if you're looking at, say, how frequent a query skeleton is.
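(Roughly, the two-table layout being described could look like the sketch below; the field names are illustrative, not Semiotic's actual schema.)

```typescript
// Sketch of the two-table query-log layout described above.

// Table 1: one row per received query. The full query text is replaced by
// its hash so this table stays small even at high query volume.
interface QueryLogRow {
  timestamp: string;                    // when the query was received
  executionTimeMs: number;              // wall-clock time from the query node
  queryHash: string;                    // hash of the normalized query skeleton
  variables: Record<string, unknown>;   // variables extracted from the query
}

// Table 2: one row per distinct query skeleton (variables stripped out),
// keyed by the same hash, so frequency statistics are a simple group-by.
interface QuerySkeletonRow {
  queryHash: string;
  skeleton: string;                     // normalized query with variables removed
}
```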
G
So, in any case, the idea, the thing we want to do using those statistics, is this: let's say you want to compute the cost of one of the most frequent queries for a particular subgraph. You would have another component that runs the measurement on that query, running it through an instrumented graph node, instrumented Postgres and so on, and extracts a bunch of numbers, so that the cost could be itemized.
G
SQL cost, memory cost used by Postgres, memory cost on the graph node, CPU cost on the graph node, the size of the response, which is also a good translation of what your egress will be. Then the indexer can use a cost model that is based on those measured costs for the query, so it's slightly more intuitive. You can also imagine that there is a global cost multiplier that could be adjusted as a function of competition on the subgraph.
G
You'd shift your overall cost such that you still receive queries at the right price. Server load too: you could imagine that you increase your costs as your server gets more load, such that you get fewer queries and can still deliver good service quality. You can also imagine having a frequency discount.
G
As a query gets more frequent, you might apply a discount on the price. And at the end of this, and this is in flux, right, this is really what we imagine right now, but if you have any ideas we're open, in any case, we would automatically generate an Agora entry with a price and push that directly to the indexer agent, automatically. So the idea here is that you would have an end-to-end system that the indexer could tweak to some extent, and it would automatically, continuously ingest queries, regularly or continuously run the statistics, update the Agora models, and push them to the indexer agents constantly, all the time.
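(A toy version of the pricing step being sketched here; the cost weights, the discount curve and the Agora output in the comment are made up for illustration, not Semiotic's actual model.)

```typescript
// Toy pricing function: turn measured per-query costs into a price, with a
// global multiplier (competition / server load) and a frequency discount.
// All constants are invented for the sketch.

interface MeasuredCosts {
  sqlMs: number;          // instrumented Postgres time
  graphNodeCpuMs: number; // CPU time in graph-node
  peakMemoryMb: number;   // peak memory for the query
  responseBytes: number;  // response size, a proxy for egress
}

function priceQuery(costs: MeasuredCosts, globalMultiplier: number, frequency: number): number {
  const base =
    0.002 * costs.sqlMs +
    0.001 * costs.graphNodeCpuMs +
    0.0005 * costs.peakMemoryMb +
    0.00000001 * costs.responseBytes;
  // Frequency discount: more common query skeletons get cheaper, down to 50%.
  const discount = Math.max(0.5, 1 - Math.log10(1 + frequency) / 10);
  return base * discount * globalMultiplier;
}

// The result could then be written out as an Agora entry for this query
// skeleton and pushed to the indexer agent.
const price = priceQuery({ sqlMs: 12, graphNodeCpuMs: 4, peakMemoryMb: 30, responseBytes: 2048 }, 1.0, 5000);
console.log(price.toFixed(4));
```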
C
I might have missed it in there, and this is all super cool, but one thing which occurred to me is the other side of this, the indexing cost of a subgraph. Obviously running queries is one thing, but some subgraphs are just much heavier to index than others. I wonder if that's something you've thought about incorporating, or if I missed that.
G
So in terms of the economics, I'm not sure how you'd apply that cost, but I guess you could just apply a cost multiplier for the whole subgraph. But then you would also need an objective measurement of the difficulty of indexing a subgraph. I haven't looked into that in particular; I would suppose it could be somewhat easier than measuring the cost of a query, because you could leverage the number of entities that you have in a subgraph, for example.
F
So one benefit of indexing rewards is that they kind of allow us to abstract away the cost of indexing from the marginal cost of serving a query, right? There's a fixed cost to indexing the subgraph, and you incur that cost whether or not you serve queries, and so the idea is that in most cases we would expect the indexing subsidy, the indexing reward, to cover the fixed cost of indexing the subgraph.
F
And so basically there's a return to scale there, which acts as a centralization vector, and part of what having this indexing reward allows us to do is that a small indexer can price their queries, in theory, as competitively as a larger indexer, at least across this one dimension. It also, hopefully, simplifies the decision problem that indexers face of how to price these queries.
F
By work done, do you mean queries served, or do you mean just indexing in general?
C
Indexing in general. So some indexers might have indexed the subgraph and already be comfortably in the black, whereas some may have only just broken even, if that makes sense.
F
Kind of, yeah. I mean, there is still sensitivity to the duration of how long you've been indexing; there's sort of an upfront cost, I guess, to getting caught up to the chain head. Maybe this is what you're indicating: the longer you can stay synced on a subgraph, the more likely you are to recoup those costs. So that's kind of a different dimension, but maybe I'm misunderstanding what you're saying.
C
I think so. I may be misinterpreting the dynamics, but if you've got a larger allocation, relatively speaking, and you've both indexed the same range of blocks, one indexer has made money on a subgraph while the other still needs to make their money on the queries.
F
That's correct, so there is a dynamic where a larger allocation on a subgraph does receive additional indexing rewards, so it's not perfectly even across the board. I guess you could call that a return to scale. We do have mechanisms, and this is maybe getting into the weeds.
F
We do have mechanisms that we've been evaluating in the econ working group to try and make smaller indexers more competitive on specific subgraphs and discourage large indexers from taking up all the space on a subgraph. That's kind of an area of active research.
A
And we're already a bit over time, so maybe we can stop here. Thank you all for joining; I think this was pretty cool. As mentioned before, please do subscribe to our calendar to make sure you don't miss any of these calls. The next one is scheduled for two weeks from now, so I'll see you there. Thank you all for joining, have a good day. Take care.