From YouTube: The Graph - Core Devs Meeting #12
Description
The Graph’s Core Devs Meeting #12
This video was recorded: Thursday, March 31 @ 8am PST, 2022.
The Graph's Media:
Twitter: https://twitter.com/graphprotocol?s=20
Instagram: https://instagram.com/graphprotocol
LinkedIn: https://www.linkedin.com/company/theg...
Website: https://thegraph.com
A
Hey everyone, welcome to our 12th Core Devs call. If you're new here, some context: we're having these calls with core contributors every month. The purpose of these calls is for core contributors to present what they've been working on, leverage the cross-functionality introduced by different working groups, brainstorm on ideas and new research tracks, and also get community feedback on some of these work streams. They're always recorded and uploaded to YouTube, so I do invite you to check our channel if you've missed the last ones.

I'll paste in the chat and show notes the link to The Graph's ecosystem calendar, so you can subscribe to future ones and to the more general, as we call them, Community Talks.

To continue, I do have some noteworthy updates the foundation might want to share. You've probably seen the blog post where we presented the R&D roadmap. I can link this as well, but I also invite you to listen to the recorded Twitter Space to hear from the different core contributors on how this was a coordinated effort and to understand what the main priorities really are. And to conclude: Brandon, co-founder of The Graph and head of research and product at Edge & Node, also did a great AMA on Reddit, and there are some interesting takes there. I'll share all of this in the show notes and in this chat as well.

All right, we do have a packed agenda. We'll start off with the Data & APIs working group, with Dotan from The Guild showing us the recent work on the graph client. I believe Dotan will also do a demo showcasing exciting features such as client-side subgraph composition, which is pretty cool. Then we'll go deep into protocol economics, hearing from multiple core contributors on recent work that will, to some degree, have an impact on different network participants, so it's good to discuss those things here.

We'll start with Prysm Group — I'm not sure if you've heard from Prysm before; they've been a long-time collaborator, bringing in their expertise in neoclassical economic modeling. We have Yuji on the call, and he'll share more about their work, focusing specifically on this recent analysis of the outcomes of an OpenZeppelin security audit. The plan is that the audit will be open sourced by the time we publish Prysm's papers, their recommendations and the related GIP that's in the works as an outcome of this working group. I'll defer to Brandon in a bit to introduce this topic and walk us through the context and relevancy here.

Then we'll go over mechanisms impacting mainly subgraph developers and curators — specifically the proposed principal-protected bonding curves and related work on a decaying curation tax and on signal renting, which is fairly new. And finally, directly impacting indexer economics, we'll do a quick intro on stale allocations, stake rebates and subsidized query settlement; Brandon can walk us through those. It's still early work, with more analysis to follow soon, but worth taking the call to have a preview and understand the motivation behind all of these.

So it's a packed agenda; I just hope we have enough time for all of this. Okay, enough from me — I think we can get started. I'm going to stop sharing my screen, and if you're ready, you can take it off.
B
Yeah, perfect — now I can unmute myself, thanks. Cool. So while I'm sharing, just to say that I planned a two-hour demo... just kidding. Don't worry, Pedro — only ten minutes.
B
Yeah, it's packed, so I'll try to squeeze in as much as possible. I'll start with a quick overview of what we did. We at The Guild started working with The Graph, and we looked into how we can improve the developer experience of dapps — the way they write GraphQL and the way they query The Graph.

This was our initial inspiration. We did a few meetings with dapp developers, tried to find all the pain points and the actual issues, and we tried to pack all of these into one GraphQL client — we're calling it the graph client. I'll explain a bit on that. We have a packed list of features, and we're able to integrate with basically everything in the GraphQL ecosystem today.
B
But I'll start with the general architecture first, and then I'll do a quick demo. Basically, the graph client is a package containing tools that improve the developer experience, the type safety and the way that you execute GraphQL.

It comes with a lot of cool features, like client-side composition, which I'll demo in a minute, and a few more. It's important for me to mention that this was an amazing journey, because we had developed GraphQL Mesh — an open source tool that basically combines and transforms anything into GraphQL — and there we had a layer that knows how to compose different GraphQL endpoints, which is basically subgraphs, as we know them from The Graph.
B
So our idea was to take these tools and leverage open source in order to make something new for The Graph and improve the way that we're querying. We have two modes, and today I'll show only the standalone mode, but it's important for me to say that there are two. You can use standalone mode, where you just use the graph client directly; this gives you all the orchestration and query planning, and you can query as many subgraphs as you want.

You don't really need any infrastructure — you don't even need a front-end project or a Node project; you can just use GraphiQL out of the box, as I'll show in a second. Besides that, we have an integration layer that provides compatibility with all the popular GraphQL clients around — whether it's Apollo Client or urql, or you can just do fetch directly, anything you want. So this is another option for using the graph client; it just wraps everything that we're doing in the standalone mode.
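For a concrete picture of the standalone mode just described, here is a minimal sketch in TypeScript. It assumes the graph-client CLI has already generated the `.graphclient` artifact from the YAML config (the generated `execute` helper follows the graph-client examples; the query fields are illustrative, not from the demo):

```ts
import { execute } from './.graphclient'; // generated by `graphclient build`
import { parse } from 'graphql';

// An ordinary GraphQL document; orchestration and query planning
// across the configured subgraphs happen inside `execute`.
const pairsQuery = parse(/* GraphQL */ `
  query TopPairs {
    pairs(first: 5) {
      id
    }
  }
`);

async function main() {
  const result = await execute(pairsQuery, {});
  console.log(result.data);
}

main().catch(console.error);
```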
B
That makes it easy to integrate. So I'll start with a demo of the client-side composition. There is a complete list of features that we have here; maybe I'll have a few more minutes to go over it after the demo. What I prepared for the demo is basically a very simple example of taking a subgraph and merging it with another subgraph.
B
Our entry point — let's say I start with a Node project — is a YAML file that defines basically everything that I need in order to fetch the data. Here I can define all my sources: if I'm using multiple indexers that are distributed, I can use that; if I have an indexer that I don't fully trust, and maybe I want to fall back to the hosted service on the same subgraph, I can even do that.
B
So you can see that I really have full control over the network aspect and the execution aspect of GraphQL. For this example I have two subgraphs — one is Uniswap and the other one is Compound — and basically this example takes both of them and defines the actual sources where each subgraph is deployed. Here I put one endpoint that doesn't really work — it doesn't really exist — and the other one is a fallback, to show how I can just fall back to the hosted service. And I have another one here, and this is a very naive way of merging two subgraphs. So this is all that I need in order to have this client-side composition actually running; I don't really even need a whole project set up.
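The fallback behavior described here is configured declaratively in the YAML file; conceptually it behaves like the sketch below — a hypothetical hand-rolled executor, not graph-client's implementation, which handles retries and error classes more richly:

```ts
// Sketch of a fallback execution strategy across two deployments of the
// same subgraph: try each endpoint in order, returning the first success.
type Executor = (query: string, variables?: Record<string, unknown>) => Promise<unknown>;

function withFallback(endpoints: string[]): Executor {
  return async (query, variables) => {
    let lastError: unknown;
    for (const endpoint of endpoints) {
      try {
        const res = await fetch(endpoint, {
          method: 'POST',
          headers: { 'content-type': 'application/json' },
          body: JSON.stringify({ query, variables }),
        });
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        return await res.json();
      } catch (err) {
        lastError = err; // bad DNS, timeouts, etc. -> try the next endpoint
      }
    }
    throw lastError;
  };
}
```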
B
I can just run yarn graphclient, and basically behind the scenes I get these two subgraphs introspected, along with the network structure that I defined here to do a fallback. This is actually running even now — it's not only a runtime thing. Everything you define here serves development purposes, and you can also reuse that configuration for the actual runtime of your dapp. So what happens behind the scenes? We're introspecting these two and then merging them in a super naive way — since there are no real conflicts between these two subgraphs, everything is just fine. So I just ran yarn graphclient, and this is what I got.
B
This is GraphiQL — the new GraphiQL. You can also see that now we have tabs; this is a really cool effort by The Guild to take GraphiQL to the next level. So what we did here first: we got these two subgraphs and, as you can see, they're both merged into one GraphQL schema.
B
This is done by schema stitching and GraphQL Mesh under the hood. So I'm just querying, and as you can see, I got one single response for both subgraphs. Under the hood, on the network layer — if I take a look here — you can see just one query being executed, because I have this tool running in development.
B
It knows how to orchestrate all the calls; but if I integrate that into my dapp, or run it from the application, I would actually see four HTTP calls here, because I have a fallback: this one is failing because of a bad DNS entry, and then this one is actually working — same here — and each part of the query is executed against the actual subgraph.
B
The other example that I made here is a bit more advanced: we're taking two subgraphs and doing some transformations and some cool stuff, and I'll mention the technical aspect in a bit. I'm taking two subgraphs here — one is NFTs and the other one is Uniswap — and what I'm trying to do is use a concept called type merging. Basically, I'm taking two GraphQL schemas — two subgraphs — and two different types, each defined in a separate subgraph, and I'm merging them together.
B
From the point of view of the GraphQL query, I don't even need to worry about these two types or any syncing mechanism; and even if I'm using Apollo Client or urql, I don't need to worry about caching, because the caching is now unified — there is just one GraphQL schema. So I'm taking these two endpoints, these two subgraphs, and then doing some manipulation. I'll go over it in a second; I just want to show you how it works.
B
So basically, what happens here: we have this GraphiQL again, and the query that I have here is far more sophisticated. Let me just try to find the user type here — as you can see, this type is actually a combination of the user type and the account type: two different GraphQL types, two different entities, merged together. And everything is happening locally, right? It's not really happening on graph node or the indexer; it's happening here locally, and the query is distributed across the two sources separately.
B
In this example — I'm just running it to make sure it works — it basically takes two types and merges them together, and the way that we're doing it is purely declarative: we didn't write a single line of code in order to do this kind of merging. So basically, we define again the handler, the subgraph, the endpoint that we're using, and then the transformations — the coolest thing here. First, we want to prefix everything coming from the NFT subgraph.
B
There are a few more configurations here, but basically we're adding a prefix, we're renaming the NFT Account type into a type called User, and then we're doing type merging — which is basically merging a type from one schema into a type from another schema. And then, when I'm querying it, I can just ask for fields across the two subgraphs. So what happened here? I got one type called User that is merged with another type called User, and now, when I'm querying it, you can see that I can query both.
B
So this is a super powerful feature. Under the hood it's called schema stitching; we call it client-side composition, because we can very easily take two subgraphs and merge them together here, locally. So this is just client-side composition — I want to show more, but I don't think I have a lot more time.
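To make the type-merging result concrete, a single query can span both subgraphs once Account is renamed and merged into User. A hedged illustration follows — the field names are hypothetical stand-ins, not the demo's actual schema:

```ts
import { parse } from 'graphql';

// One logical `User` type, physically backed by two subgraphs. The client
// splits this document, queries each source separately, and joins the
// results locally -- no changes to graph-node or the indexer needed.
const userWithNftsQuery = parse(/* GraphQL */ `
  query UserWithNfts($id: ID!) {
    user(id: $id) {
      id
      swaps {   # illustrative field from the Uniswap subgraph
        id
      }
      nfts {    # illustrative field from the (prefixed) NFT subgraph
        id
        tokenURI
      }
    }
  }
`);
```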
B
You get TypeScript support out of the box: we're using a concept called TypedDocumentNode, so we basically type the actual operation, and when you use the GraphQL query to execute, you get a typed response. Everything is fully typed, so you're not going to have runtime issues. So yeah — this is 10 minutes on the graph client.
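A short sketch of the TypedDocumentNode idea mentioned here: the result and variable types travel with the document, so any compliant client returns a typed response. In graph-client the types are code-generated from the schema; the hand-written types below are just for illustration:

```ts
import { parse } from 'graphql';
import type { TypedDocumentNode } from '@graphql-typed-document-node/core';

// Hand-written stand-ins for what codegen would derive from the schema.
interface UserQueryResult {
  user: { id: string } | null;
}
interface UserQueryVariables {
  id: string;
}

// Casting the parsed document attaches the types; executing `userQuery`
// then type-checks the variables and yields a typed `result.data`.
const userQuery = parse(/* GraphQL */ `
  query User($id: ID!) {
    user(id: $id) {
      id
    }
  }
`) as TypedDocumentNode<UserQueryResult, UserQueryVariables>;
```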
A
Yeah — just try it, try it out, and reach out on Discord. I see Slimchance is also asking questions; maybe you can take these in the chat while we move forward. But I can see an opportunity here where you can probably take some more time, leverage what Slimchance has been doing with these questions on more of the developer-focused calls, and go deep into this client with some subgraph developers.
D
Sure, yeah — I'm just teeing this one up, so I'll make it quick. About a year ago, before the network launched, there was an audit by OpenZeppelin, which is, you know, part of our due diligence in launching the protocol.
D
There were a couple of items in there that were left outstanding beyond the protocol launch. Given that it was a technical audit, OpenZeppelin was sort of unwilling to say for sure that these things were mitigated, because the mitigations were economic in nature and required economic assumptions rather than technical assumptions.
D
So, about eight to nine months ago now, the foundation issued a grant to the Prysm Group team to take a look at some of these economic issues and see if we could get sort of a seal of approval on these two items.
D
One of them was what's called delegation front-running: the idea that a delegator could come in and try to delegate to an indexer right before the indexer claims indexing rewards, to try to front-run the delegators that are already on it. The other was an idea they called POI spoofing: the notion that an indexer might delegate to themselves, collect indexing rewards and get slashed — but the indexing rewards that they send to themselves via the delegation mechanism might exceed the slashed amount.
D
So you can kind of see how whether or not this is profitable is really about the economics of the system. This turned into a multi-month research effort with Prysm, and a lot of really great work came out of it — and I guess I'll let Yuji take it from here.
C
Yeah — very excited to be here. This is our first time participating in the Core Devs call, so for folks who are not familiar with us, I just want to give a very quick introduction. We are an economic consulting firm that specializes in blockchain technology. We have extensive collaboration, on both the education and research fronts, with a lot of the leading economic academic and government institutions.
C
I know that the term "neoclassical economics" floats around the community quite a bit, and it can mean very different things to different people, so I would like to clarify that the approach that we use and the tools that we have actually go way beyond what is narrowly defined as classical economics. For example, our two founding economists were both trained at Harvard, and they have deep expertise in designing contracts and market structures — and this is what we bring to the work we do with The Graph and beyond.
C
This is the current team working with The Graph; we combine experience in economic academic policy research and also business and management decision making. Before moving on to the attack itself, I guess I should give a background of what we are doing here.
C
When we started engaging with The Graph, we did a step-by-step economic mapping of every transaction on The Graph Network, to highlight which ones might have economic significance and which are the open design questions that need to be addressed or optimized. It's been, I think, nine or ten months, and it's very exciting to see a lot of improvement happening in comparison to what we mapped back then. We are now working on a few different fronts.
C
This is the first effort that is translating into a GIP, and we think we might have a few others coming forth, depending on how our collaboration works with both the graph team and also some of the other teams in the ecosystem, like Semiotic and BlockScience.
C
So, just to reiterate what the attack is really about, and to pinpoint the key parameters that decide its profitability: we have someone who's trying to basically get indexing rewards for free. In order to do that, this attacker would open an allocation and self-delegate, typically up to the maximum delegation ratio. They would want to do that because they want to minimize the proportion of their capital that's subject to slashing.
C
Of course, when they self-delegate they still need to pay a delegation tax, but under most scenarios it's reasonable to expect that to be lower than the slashing risk. Then this attacker will submit an invalid POI — this could just be something that they make up, without actually doing the work of syncing the subgraphs — and right after that they will claim the reward and divert the funds, so that the reward itself is also not subject to withdrawal delays or slashing. The only numerical cost that they might be worried about is the part of the allocation that's subject to slashing.
C
To translate that into an objective, or payoff, function for the attacker: as a benchmark we have the honest indexers, who are assumed to be financially motivated. We know a lot of folks in the community are altruistic, but this is just to help us pin down the economic environment in which an attacker could potentially come in and reap some reward. The spoofer — the attacker — has some a priori incentive to engage in this attack, because producing a valid POI requires some hard work, and the attacker might want to get the reward without incurring that cost. We call that cost the operational cost.
C
This is something an honest indexer has to pay, whereas the attacker can avoid it; for example, it could be the cost of running an Ethereum archive node and all of the associated software to support indexing. At the same time, attacking is not free: as we've mentioned, the honest indexers don't need to worry about being slashed as long as everything works as intended, but the attacker faces a probability of being found out — and this is the parameter that we can manipulate; Ricky and Craig will talk more about that.
C
I'm just setting up the analytic framework here. So the attacker faces a slashing risk, and as I'll show in a few later slides, that translates into a higher risk-adjusted cost of capital. The design problem for us is to figure out the optimal combination of policy parameters that makes such an attack not only unprofitable but robustly unprofitable, while not introducing too much additional distortion to the network. So this is the design question.
C
Given the time constraint, I don't want to explain the profit function and how we derived it in detail, but the rough idea is that the attacker needs to decide how much to allocate in total, and then, for that sum, they additionally need to decide how much to self-stake as an indexer and how much to self-delegate. The self-delegated part can shield them from slashing — and then there's also this fixed cost.
C
In order to characterize the condition under which the attack is unprofitable, we basically compare the profits: on the same subgraph, under the same economic conditions, what an individual indexer would get acting honestly versus how much they would get as an attacker. We want the profit for the honest indexer to be higher, and that gives us this inequality.
C
The key parameter here is this risk- and tax-adjusted cost of capital, r. For the honest indexer there's an opportunity cost: by locking up their capital for indexing, they're forgoing other ways that they could earn a return, and r is supposed to capture that. But on top of that we also have a more complicated term that captures the trade-off for the attacker between self-delegating and staking.
C
The attacker is solving a linear optimization problem: if they allocate their funds to self-delegation, they pay a delegation tax of tau_d, and if they put funds into their own stake, they face an expected cost of slashing, which is the probability of being detected times the slashing portion. One of the two parameters is, I guess, harder for us to control than the second one.
C
The decision here then comes down to a comparison between the slashing risk, sigma psi, and the delegation tax, tau. If the slashing risk is really low, then of course the attacker would want to put all of their capital into self-stake; conversely, if the slashing risk is high, then the attacker would want to put as much money as possible into self-delegation — and that's when they would push all the way until they hit the maximum delegation ratio.
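In symbols, the corner-solution logic just described can be sketched as follows — a reconstruction from the talk, not Prysm's exact formulation. With total capital $A$ split into self-stake $x$ and self-delegation $A - x$, delegation tax rate $\tau_d$, detection probability $\psi$ and slash rate $\sigma$, the attacker's expected capital cost is

$$ C(x) \;=\; \underbrace{\tau_d\,(A - x)}_{\text{delegation tax}} \;+\; \underbrace{\psi\,\sigma\,x}_{\text{expected slashing}}, \qquad 0 \le x \le A. $$

Because $C$ is linear in $x$, the optimum sits at a corner: if $\psi\sigma < \tau_d$, the attacker self-stakes everything; if $\psi\sigma > \tau_d$, they self-delegate as much as the maximum delegation ratio allows (the ratio bounds $A - x$ relative to $x$).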
C
These two slides define how we derived this equation for r. We've done some preliminary numerical analysis, but in order to fully characterize our choices and determine the best thing to recommend in a GIP, the fantastic data science team at Edge & Node is doing a much broader parameter sweep and sensitivity analysis, and I'll defer to them on their work and the visualization of the results.
A
Let me know if you can share your screen, or if you need some help I can do it for you — just send me the right link.
A
Oh,
you
can't
unmute.
Okay,
I
see
we
see
your
screen,
but
there's
no
audio
coming
through
as
a
fallback.
Do
we
want
ricky
or
brandon
to
take
over
while
you
try
to
maybe
restart
zoom
correct.
D
I
think
it's
just
does
it
need
to
be
given
permissions
on
you.
E
There
we
go.
Can
you
hear
me?
Okay,
now,
I'm
good
yeah
cool
perfectly
hey.
My
name
is
craig:
I'm
the
data
science
lead
at
edge
of
node,
and
today
I'm
going
to
be
presenting
the
work
of
my
colleague,
ricky
escapon.
E
He's
done
some
awesome
work
following
up
on
prism's
model
and
yeah,
let's
get
into
it
yeah.
So
I
think
ug
gave
a
good
explanation,
but
you
know
what
we're
trying
to
solve
for
here
is
the
attack
identified
in
the
open,
zeppelin
audit,
where
so,
where
an
indexer
minimizes
their
self-stake,
which
is
the
value
at
risk
of
slashing
and
maximizes
self-delegation
from
another
wallet
and
then
spoofs
pois
such
that
they
don't
have
to
incur
the
cost
that
an
honest
indexer
does
in
terms
of
index
and
sub-graphs
serving
queries,
etc.
E
So
the
goal
is
to
identify
new
protocol
parameters
that
make
this
attack
vector
unprofitable
under
reasonable
economic
assumptions
and
yeah,
and
do
that
in
a
minimally
invest
invasive
way.
E
So
these
are
the
parameter,
the
main
parameters
that
the
prism's
prism
model
defines
and
the
first
five
are
in
our
control.
Those
are
protocol
parameters
that
we
can
move
around.
E
The
last
two
are
cost
parameters
for
honest
and
dishonest
indexers,
and
I
think
one
of
the
one
of
the
main
findings
from
our
analysis
is
that
that
this
is
like
the
primary
factor
that
determines
the
attack:
profitability,
the
the
cost
difference,
obviously
between
the
honest
and
dishonest
indexers.
So
so
the
values
that
you
need
to
give
in
the
protocol
parameters
to
prevent
against
such
an
attack
is,
you
know,
extremely
sensitive
to
the
cost
profile.
E
The pathway through which that impacts the profitability of this attack is that when you increase curation signal, you also increase the indexing rewards, which incentivizes more competition on a subgraph from other indexers — and that reduces the profitability of an attacker.
E
But
yeah
like,
like
I
said,
the
the
profitability
is
mainly
determined
by
the
difference.
The
difference
in
the
cost
profile
of
an
honest
and
dis
dishonest
indexer,
and
we
can
kind
of
see
this
in
the
the
equations
that
the
prison
group
defined
so
on
the
left.
E
E
One of the things we really focused on was reaching out to the community, leaning on their domain knowledge of the cost functions for indexers, and also on what's the minimum required for POI spoofing. Initially these were defined as having an indexing node versus the operational costs that vary with the number of queries served, but that doesn't exactly match up to how this attack would be operationalized, nor to what the cost profile of an honest indexer is.
E
So
I
mean,
if
you
just
take
the
formula
formula
literally
your
the
honest,
indexer
ci
is
actually
the
cost
that
both
honest
and
dishonest
indexers
face
to
submit
pois,
collect
indexing,
rewards
and
c0
is
the
cost
that
only
in
on
an
honest
indexer
faces
and
this
understanding
it.
E
This
way
actually
puts
us
in
a
pretty
good
car
spot
right
now
to
prevent
this
attack,
especially
under
a
high
gas
cost
regime,
because
both
honest
and
dishonest
index
or
space
gas
costs
and
at
least
qualitatively
what
we
hear
from
indexers
is
that
you
know
this
is
kind
of
like
the
dominant
factor
in
determining
how
many
subgraphs
that
they're
going
to
index.
E
But
the
hardware
you
know
hardware
would
allow
for
them
to
index
many
subgraphs
and
since
since
this
is
cost
per
per
subgraph,
the
more
sub
graphs
that
that
indexers
are
are
serving
on
the
same
hardware.
That
brings
down
the
the
on
a
specific
indexer
cost
and
such
that,
like
the
gas
cost,
is
actually
you
know
much
larger.
E
Obviously,
that
won't
be
the
case
in
the
long
term,
but
but
that
does
put
us
in
a
very
good
spot
for
preventing
the
attack
now
with
without
extreme
values,
for
for
a
minimum
curation
signal
yep,
so
the
the
initial
yeah,
the
initial
estimates
were
much
higher
than
than
we
found
when
we
discussed
with
with
indexers.
So
you
know
it,
it
sounds
like
the
minute.
E
You
know
the
the
most
competitive
pricing
for
cloud
hardware
for
recommended
setup
or
even
higher
than
the
recommended
setup
is
around
600
a
month
in
infra
cost,
and
you
could
easily
index
10
sub
graphs
on
that,
bringing
us
down
to
a
cost
of
about
60
per
month
per
subgraph
and
in
reality
you
know
there.
There
are
people
that
index
over
30
on
on
such
a
setup,
30
subgraphs,
the
lighter
weight
ones
so
so
yeah
we
can
bring
that
cost
down.
Quite
a
bit
for
the
honest
indexers.
E
Yep — so Ricky created some awesome tools for us to do grid search, basically moving different parameters around.
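The kind of sweep described can be sketched as below — a generic harness in TypeScript, with `attackerProfit` as a placeholder for the Prysm profit model rather than the real formula:

```ts
// Sweep detection probability (y-axis of the heat map) and slash rate
// (x-axis), recording the smallest curation signal that satisfies the
// zero-profit condition for the attacker at each grid point.
type ProfitFn = (detectProb: number, slashRate: number, minSignalGRT: number) => number;

function sweepMinSignal(attackerProfit: ProfitFn) {
  const rows: { detectProb: number; slashRate: number; minSignalGRT: number }[] = [];
  for (let i = 1; i <= 20; i++) {
    const detectProb = i * 0.05; // 0.05 .. 1.00
    for (let j = 1; j <= 20; j++) {
      const slashRate = j * 0.005; // 0.005 .. 0.10
      let signal = 0;
      // Raise the required signal until the attack stops being profitable.
      while (attackerProfit(detectProb, slashRate, signal) > 0 && signal < 1_000_000) {
        signal += 10;
      }
      rows.push({ detectProb, slashRate, minSignalGRT: signal });
    }
  }
  return rows; // e.g. pivot into a heat map for visualization
}
```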
E
He also made some notebooks to replicate the formulas — in LaTeX as well — so that we could cross-check and replicate the analysis.
E
This is all very tentative, but we produced a heat map with the likelihood of spoofing being detected on the y-axis and the slash rate on the x-axis. If we look at the current 0.025 — two and a half percent — slash rate, and if we assume, very conservatively, that there's a 20 percent chance of POI spoofing being detected (we think it's actually much higher than that), that gives us a minimum curation parameter of 290 GRT to prevent this attack and satisfy the zero-profit condition. So that actually works out quite well.
E
I think there's some old language on the last slide, but it's actually quite good. I think only 13.9 percent of subgraphs have more than 10 GRT — we have a lot at zero signal that were kind of trial subgraphs — and of those above 10 GRT, only 39.9 percent of the subgraphs currently on the network would be affected. We're going to do some sensitivity analysis, robustness checks and other things.
E
So this is all very tentative, but we'll produce a GIP proposal for changing that parameter, and we're definitely interested in hearing the community's feedback on that idea — and other ideas that you might have. Some other things that we thought about are basically anything that increases the cost of submitting a spoofed POI, or just any POI, such as minimum hardware requirements, so that you basically can't forgo that cost.
E
For instance, if telemetry is broadcast, or there's some sort of proof-of-work task — we're not seriously considering these; this is just kind of brainstorming other ways that would prevent such an attack. Also, decreasing gas costs could actually have a positive effect if it brings down the cost of honest indexing,
E
You
know
such
that
you
know
people
take
the
200
of
you,
know
gas
cost
per
subgraph
that
we're
spending
opening
and
closing
allocations,
and
they
just
use
that,
to
you
know,
index
more
sub
graphs
on
the
same
same
hardware,
that's
going
to
bring
down
the
honest
index
or
cost
per
sub
graph.
E
You
know-
and
you
know
we'll
actually
could
actually
work
in
our
favor
there,
so
yeah
so
we're
gonna
do
some
sensitivity,
analysis
for
exogenous,
economic
factors,
opportunity
costs,
interest
rates,
grt
prices,
gas
costs,
etc,
and
then
we'll
we'll
give
a
proposal
and
put
that
out
soon.
Thank
you.
D
Thanks, Craig; thanks, Yuji. So, just to put a pin in that: the minimum signal parameter is actually a new parameter, and Ariel has done the smart contract development for it already — that's already been audited. So, as Craig mentioned, the next steps are really just to do the sensitivity analysis and find the right value to set that parameter to; essentially, at that point we'd consider this a really hardened analysis with respect to how viable that type of attack is.

The positive impact to the ecosystem is that it finally lets us publish the original OpenZeppelin audit, because it allows us to disclose these two attacks. We've been spending basically months doing this kind of analysis to really make sure that we've dotted our i's and crossed our t's, and that's a precursor to a lot of DeFi integrations with The Graph: having that audit out there will allow The Graph to interoperate and integrate with a lot more protocols in the DeFi ecosystem.
D
So that's another positive outcome of us being able to publish these audits. Cool — Pedro, do you want me to just jump right into the next sections?
A
Yeah, I think so — I think we've wrapped up. Thanks for the context and motivation; I think it's clear. So we have 15 minutes and you have two segments — might as well, yeah.
D
Yeah, that's going to be tough, but I'll do my best. Can you guys see my screen? Yeah? There we go — cool. So I basically want to give the community an update on a bunch of GIPs that are coming through the forums. I'm not going to have nearly enough time here to do them justice, so please check these out — that would be my main call to action. The first bucket of GIPs is really focused on subgraph developer improvements, and the big one here is principal-protected bonding curves.
D
This was an idea proposed by a member of our community, Juan from GraphOps, and the motivation here is twofold. One, it's recognizing that within the curator role there are actually two personas: the financially motivated curator and the subgraph developer. The subgraph developer is really just trying to get utility out of the network; they're not trying to get financial upside out of the curation mechanism, and they don't need the volatility of having their signal potentially lose or gain value.
D
Meanwhile, on the financially motivated curator side, we're also seeing MEV-style activity in the subgraph deployment bonding curves — things like runs on the bank, things like sandwich attacks — and so this aims to mitigate both of those sets of problems; specifically, on the MEV side, at the subgraph deployment level. So, just to do a quick recap:
D
Most people might not be aware of this, but the curation market in The Graph actually uses a nested bonding curve architecture. You have the inner bonding curves, which are the subgraph deployment bonding curves; these tend to be short-lived, because they basically only live until the next subgraph upgrade, so that could be on the order of weeks, or even days in some cases. And then we have the outer bonding curve, which represents the sort of long-term identity of the subgraph.
D
What they do, in a nutshell, is preserve the incentives to be early when it comes to signaling. You still use the bonding curve to decide how many shares to mint — you still use the Bancor formula — so being early gets you more shares, and more shares means a bigger share of curation royalties, i.e. more query fees. However, when you unsignal, what you actually get back is your cost basis — the amount that you initially put into the curve — plus any accumulated curation royalties. So what that effectively does is protect your principal while keeping the incentive to signal early.
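Schematically — a hedged reading of the proposal, not the GIP's exact math — minting still follows the standard Bancor purchase formula, while redemption becomes principal-protected. With share supply $S$, reserve balance $B$ and reserve ratio $\rho$, a deposit $d$ mints

$$ \Delta S \;=\; S\left(\left(1 + \frac{d}{B}\right)^{\rho} - 1\right), $$

so earlier signalers still mint more shares per GRT, but on unsignal, curator $i$ receives

$$ \text{payout}_i \;=\; \text{basis}_i + \text{royalties}_i, $$

their recorded cost basis plus accumulated curation royalties, rather than a sale along the curve.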
D
Another implication of this — I'll get to it, I think, in the next slide — is that shares in that bonding curve become less fungible, in the sense that with each balance of shares we're also tracking a cost basis that determines what balance you get back when you unsignal.
D
It requires a change to the payout logic in the subgraph deployment bonding curves, because we can no longer deposit directly into the reserves of the curves, which is how the protocol works today. There's also a lot that is both enabled and required at the GNS level — all of this is laid out in the GIP — but the GNS basically needs to be cost-basis aware, it needs to be aware of the non-fungibility of those shares, and it needs to pass through principal protection for the subgraph owner.
D
That's a big benefit of this proposal: the subgraph owner no longer has to risk their capital when using the network. We can also remove some protections at the GNS level that existed before — things like upgrade protections and slippage protections — just because the principal-protected subgraph deployment curves are no longer susceptible to those kinds of attacks. And there are some things that are uniquely enabled by this proposal, and that's where the next two proposals dovetail.
D
The first one that's enabled by this is decaying curation taxes — and this is just a one-slide summary, so this will be quick. Basically, the motivation for the curation taxes that exist today is that indexers must be protected from overly frequent upgrades.
D
This is less of an issue in the current high gas cost environment, but it will be when we move to L2 — and it was when we were first designing the network, in the gas environment back then. Effectively, if a subgraph owner upgrades their subgraph too frequently, they're essentially hurting indexers, because indexers don't get to earn rewards for a certain period of time after incurring the cost of fully syncing the subgraph, or potentially after opening an allocation. For indexers, it's better if an allocation can last the full 28 days; otherwise they're incurring multiples on the gas cost.
D
The downside of the curation tax is that it's a blunt instrument: whether you upgrade frequently or infrequently, you still pay the same tax, and it's charged on deposit. It's basically charged when you signal into the curve, not when you unsignal — and that was because we didn't want to tax curation royalties.
D
The nice thing about the previous proposal is that it has decoupled the way that we track signaling from the rewards that you get when you unsignal, so we now actually have the capacity to tax on withdrawal from the bonding curve — which means that we can decay the curation tax to zero over time. As an example, it could decay linearly over, let's say, two months; so if you've left the subgraph signaled for at least a month, then you only pay half the curation tax. That would be how it works.
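As a worked formalization of the linear decay just described (illustrative parameters): with initial tax rate $\tau_0$ and a decay window $T$, the effective tax on a withdrawal after time $t$ signaled would be

$$ \tau(t) \;=\; \tau_0 \cdot \max\!\left(0,\; 1 - \frac{t}{T}\right), $$

so with $T = 2$ months, unsignaling after one month pays $\tau_0 / 2$, matching the example above, and anything left signaled longer than two months pays no curation tax at all.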
D
This also involves new tracking — another dimension of non-fungibility in the curation shares. The previous proposal introduces cost basis tracking; this proposal introduces cost-weighted time basis tracking, to track the time signaled. So it's not that shares are completely non-fungible now — they're still ERC-20 compliant — it's just that there's this additional metadata, tracked along with the balances, that you could interpret as a degree of non-fungibility. But this should reduce costs for subgraph developers, which has been a big pain point of using the network in general. So the last one, which is uniquely enabled by principal-protected curves, is signal renting.
D
As I mentioned, signaling in the protocol today — because we bundled together the subgraph owner and the financially motivated curators — carries principal risk; it carries balance sheet risk. That makes it difficult to do things like renting signal to subgraph developers, because there would be a chance of losing the principal of that amount of signal. But renting is actually a natural fit for subgraph developers: today they have to signal into a bonding curve — sometimes on the order of thousands of GRT — all before they receive any value in return from the network. That's really unfamiliar to developers who are used to more of a SaaS-like billing experience, where you're charged monthly — and signal renting is exactly that.
D
It gives us the ability to say: pay x amount of GRT a month to rent y signal as a subgraph developer or subgraph owner. And we can do that in a fully automated fashion, where the liquidity for renting signal is provisioned in something like a liquidity pool and these rental agreements are automatically issued.
D
This is something that Howard, one of the new researchers at Edge & Node, will actually be working on a GIP for. And as I said, this also depends highly on the GNS exposing principal-protected endpoints, so that the smart contracts that implement the signal renting can lock down which endpoints can be called to only the principal-protected ones.
D
Okay, so the next bucket of GIPs — and again, I know we're short on time, so I'll try to move through this in the five minutes I have left — these are indexer economics improvements. They all change the profitability of different indexer behavioral profiles, in ways that we think promote more pro-social behavior and better reward active and honest indexers in the network. The first one here is force-closing stale allocations.
D
Basically, stale allocations by quote-unquote "lazy whales" hurt and deter active indexers from joining a subgraph: they're not claiming indexing rewards themselves, and they're also preventing other active indexers from claiming indexing rewards. As an example, the network issuance rate of GRT for indexing rewards is three percent; in 2021 we only had 2.7 percent issued, which amounts to about 30 million GRT that active indexers in the protocol didn't earn because of these kinds of lazy whale profiles.
D
The GIP that Sam from Semiotic and Ariel from Edge & Node have been working on is to allow anyone to force-close a stale allocation. We expect that this will divert more indexing rewards to active indexers and, by extension, improve quality of service on subgraphs — because, as we've seen from some of the other analyses that Prysm and Semiotic have done, more indexing rewards on a subgraph leads to more distinct indexers and, by extension, better quality of service.
D
One note here is that we're making an exception for so-called zero allocations, so that an indexer who wants to forgo collecting indexing rewards but still announce themselves for serving queries can do so and can let those allocations run long. Next:
D
The challenge that we're trying to overcome here is that oftentimes there's no incentive to settle query fees, because the gas cost of settling them exceeds the micropayments that have accumulated in the state channel. That's why, if you look at the Graph Explorer today, you'll see a lot of epochs with zero query fees.
D
It's not that there were zero query fees; it's that it wasn't economically viable to settle them. And indexing rewards were always intended to subsidize indexers not just for indexing, but for serving queries on a subgraph.
D
So the solution here is that indexing rewards would be tied to the additional transactions required for collecting query fees and claiming rebates from the rebate pool. All other things being equal, it should make indexers indifferent between settling query fees and not settling them — put differently, if query fees are positive, the incentive should be to settle them, for any nonzero amount.
D
Also, going back to the analysis that Craig did, it decreases the difference between those c_i and c_0 terms; basically, it diminishes the delta between an honest indexer and an attacking or lazy indexer.
D
This will give us a more dynamic query market, but the effect is that it's also going to lead to higher fixed costs of indexing a subgraph in general, because now, to collect indexing rewards on a subgraph, you're not just submitting the POI — you're also submitting the collect and claim transactions as part of that. And the last one here is stake rebates. This GIP is still in progress, so keep an eye out for it in the forum soon. This goes back to how the Cobb-Douglas rebate formula works.
D
If you serve x percent of query fees, then the optimal behavior is to stake x percent of the network stake. However, even though that's the long-term equilibrium, when the network is out of equilibrium, the current mechanism actually benefits quote-unquote "lazy whales" — which here I'm just defining as indexers that serve either zero query fees or far fewer query fees than their share of stake in the network would imply — at the expense of active indexers.
D
So this is the rebate function today, where you basically have this kind of weighted geometric mean: the left term is the indexer's share of fees, the other is their share of stake, and this factor is all the fees that have been collected into the rebate pool. And you can see that the net rebates that people receive are actually sort of centered around zero.
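For reference, the Cobb-Douglas rebate function being described — reconstructed here from the talk's description of its terms — gives indexer $i$, with query fees $f_i$ out of a pool total $F$ and stake $s_i$ out of total allocated stake $S$, a rebate of

$$ r_i \;=\; F \cdot \left(\frac{f_i}{F}\right)^{\alpha} \left(\frac{s_i}{S}\right)^{1-\alpha}, $$

with $\alpha$ the Cobb-Douglas exponent; the "net rebate" in the histogram is $r_i - f_i$.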
D
A bunch of indexers actually receive a positive net rebate under the current mechanism, and all of those indexers happen to be ones that are doing less work than their share of stake would imply; the active indexers — the ones doing more work than their share of stake would entail — are the ones that end up with negative rebates. We've heard from indexers that this dynamic is really harmful.
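A toy numeric illustration of that dynamic under the formula above — made-up balances and a hypothetical exponent, not live protocol values:

```ts
// Net rebate = rebate - fees contributed. A high-stake / low-fee indexer
// ("lazy whale") lands above zero; a low-stake / high-fee active indexer
// lands below it.
const ALPHA = 0.77; // hypothetical exponent, for illustration only

function rebate(fees: number, stake: number, totalFees: number, totalStake: number): number {
  return totalFees * Math.pow(fees / totalFees, ALPHA) * Math.pow(stake / totalStake, 1 - ALPHA);
}

const totalFees = 1_000;
const totalStake = 10_000;

const whale = { fees: 10, stake: 5_000 };   // stakes a lot, serves little
const active = { fees: 800, stake: 1_000 }; // serves a lot, stakes little

console.log('whale net rebate:', rebate(whale.fees, whale.stake, totalFees, totalStake) - whale.fees);    // > 0
console.log('active net rebate:', rebate(active.fees, active.stake, totalFees, totalStake) - active.fees); // < 0
```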
D
With the stake rebate added, the two components of the formula are almost symmetric to one another, so an indexer that would have been positive under the query fee rebate will be negative under the stake rebate, and vice versa. What that effectively does is cap net rebates at zero: no indexer will end up with a positive net rebate. The optimal behavior will still be to stake and serve queries in equal proportion — that's how you get to the zero-net-rebate point, which is optimal. Importantly, though, all active indexers are better off under this new mechanism. Lazy whales incur a new cost — that's what gets rid of those positive rebates to the right of zero — and some of that is redistributed; you can think of it as going to active indexers. So indexers that are serving a proportional amount of queries in the protocol today will be made better off by this mechanism.
D
There's an Observable notebook that will also be linked in the GIP; it does the numerical analysis here, so if you want to see how these histograms were generated, check that out.

The next steps: because all three of these impact indexer economics, we think it makes sense to evaluate them as a bundle. On net, we think stake rebates will benefit some indexers quite a bit, while adding costs to inactive or less active indexers; stale allocations is a net win, again, for active indexers, because it's going to increase the amount of indexing rewards going around; whereas subsidized query settlement will introduce a higher fixed cost for all indexers that are collecting indexing rewards on subgraphs. So there's some give and take here with respect to the impact on indexers, but we think on net this will be positive for active indexers and positive for the protocol as a whole. Stay tuned for some analysis that explores the predicted effects of all three of these. Great — I don't think we actually have any time for Q&A, but feel free to post those questions in the forum.
A
Yeah, Kai just asked one — so Kai, it might be best to just post it on the forum, where it can be properly dug into.
A
Good — thanks, guys; thank you all for joining. This was great. I'll see you on the 28th of April; that's the next one we have scheduled. If you want to follow along with the next ones, just go to thegraph.foundation — you'll see an ecosystem calendar link. Click on it, you'll get access to the calendar, and subscribe so that you don't miss any of these upcoming ones.