From YouTube: The Graph - Core Devs Meeting #13
Description
The Graph’s Core Devs Meeting #13
This video was recorded: Thursday, May 5 @ 8am PST, 2022.
The Graph's Media:
Twitter: https://twitter.com/graphprotocol?s=20
Instagram: https://instagram.com/graphprotocol
LinkedIn: https://www.linkedin.com/company/theg...
Website: https://thegraph.com
A
Okay, let's kick it off. Welcome everyone back to our thirteenth call. For some context, we are having these calls with core contributors every month. The purposes of these calls are for core contributors to present what they've been working on, to leverage the cross-functionality introduced by the different working groups to brainstorm new ideas and research tracks, and also to get community feedback on some of these work streams.
A
I'll paste in the chat and show notes a link to The Graph ecosystem calendar, which you can use to subscribe to future calls like these, as well as community talks, which are now being taken over by the advocates.
So before we start, I'd just like to mention two amazing events that are coming up that are all about web3, which is pretty cool. Graph Day is coming on the 2nd of June. It's a full day where you'll get to hear from leading protocols and developers building this space; you definitely don't want to miss it. Wave 2 tickets are still available, so you can head to thegraph.com and click the Graph Day link at the top of the menu, and there will be a link for you to get your tickets.

Then there's Graph Hack, a three-day event following up Graph Day. It's a hackathon packed with amazing workshops, even for those who are not hacking — and if you are, there are almost $400k in bounties from our different sponsors, so that's a nice incentive right there. If you're coming, you'll also have a chance to talk to The Graph's core developers and do some great work there.
A
Okay, about this call today: I do have some topics we can start with. As a follow-up to the latest core R&D sync, we'll hear from Craig at Edge & Node on what the recent GIP-0030 is all about. I believe Brandon is also on the call, so we can get some context here.

As a quick recap, this is the resulting work of Edge & Node's data team and Prysm Group's analysis of the critical vulnerability disclosed in the protocol audit by OpenZeppelin, which we've now open-sourced — I can also share some links here. Next, from the Indexer Experience working group, we'll have Ani from Semiotic presenting the recent efforts around the automated allocation optimization tool, as well as a new framework that's brand new, but we'll get a quick intro.

I think GraphSim might be of interest not just to indexers, but to everyone else who wants to quickly simulate, test, and study any modifications to the protocol's mechanisms. And then to wrap up, we'll finish with the Data & APIs working group: Adam will walk us through the Hardhat plugin. We have a thread on our forum, and I can also link it there for some context.

This is in the works, so we'll expect some updates here, as well as on the block oracle, which is a pretty important one: that's the solution that might unlock — or rather, that will unlock — multi-chain indexing rewards on the network.
B
Sure, yeah, I'm happy to tee it off really quickly. I think you did a great overview, but yeah, as Pedro said, the work in GIP-0030 kind of started over a year ago with the OpenZeppelin audits. There were a couple of unresolved vulnerabilities identified in that audit that required economic analysis, not technical analysis, to determine whether or not the vulnerabilities were legitimate.

The Foundation gave a grant to Prysm Group about a year ago to take a classical economic approach to looking at these vulnerabilities. One of them, called delegator front-running, they identified out of the gate that the protocol wasn't vulnerable to, from an economic perspective. The other one, called POI spoofing, was disclosed a month or so ago, and that did have a recommendation attached to it, which was this minimum signal. So Craig, Ricky, and the data science team at Edge & Node did a really great analysis to shed light on some of the pros and cons, and also to figure out some of the considerations when choosing the right parameter for minimum signal — I'll let Craig go deeper into that.

One thing I just want to call out is that this proposal will be a regression for some subgraphs in the network. There will be subgraphs that, in theory, are eligible for indexing rewards today that would not be eligible for indexing rewards after this proposal goes into effect, which would mean that very likely any indexers on those subgraphs would stop indexing them.

We believe that the threshold we've chosen minimizes impacts to any subgraphs that might be being used in production. Because this is a security recommendation, this isn't something that we're necessarily looking for a yes-or-no vote on based on subjective preferences, but we did want to disclose this before the upgrade to leave ample time for comment periods, in case there are any concerns that should be incorporated into the analysis.
C
Cool, thank you, Brandon. Yeah, so I can just give a TL;DR on the GIP initially and list some of the pros and cons, and if we have a little time afterwards, we can open it up for questions. So, as Brandon mentioned, Ricky Esclapon, my colleague, did a lot of interesting work: a parameter search over the parameters identified in the Prysm model as influencing the profitability of this attack.

So, for background on the attack: indexers could theoretically provide spoofed POIs — invalid POIs — and also minimize their self-stake, which is at risk of slashing, and maximize their self-delegation, which is not at risk of slashing. Theoretically, it is possible for it to be profitable for them to collect indexing rewards.
C
The indexing rewards would still exceed the negative expected value of slashing. So they have a formal model with a number of parameters, like slashing risk, slashing rate, max delegation ratio, etc.

We did a parameter search over all the values of those and identified that, in their model, the biggest factor influencing the profitability of the attack was the difference between honest indexer costs and dishonest indexer costs. Honest indexer costs are costs specific to people who are actually doing the work of indexing, whereas dishonest indexer costs you can think of as the costs faced by both honest and dishonest indexers — things like gas costs and the minimum hardware cost just associated with submitting a POI in the first place, even a spoofed POI.
C
So honest indexer costs would be things associated with actually indexing subgraphs. Probably the biggest insight from the analysis is that, on a per-subgraph basis, the honest indexer costs are not super high right now, because you can serve many subgraphs from the same hardware. So on a per-subgraph basis, the infrastructure costs are not super high, and the majority of cost is gas cost from opening and closing allocations — and that's faced by both honest and dishonest indexers.

That really helps us in coming up with less invasive values for parameters that would protect against this attack, and in particular, we thought the best one was a minimum curation signal. The idea here is that if you increase the curation signal, that is going to make it rational for more indexers to enter and pursue the indexing reward: the indexing rewards will be higher, more indexers enter through the market, and that will create more competition and bring down the profitability of spoofing.
C
So yeah, we looked at parameters that would satisfy the zero-profit condition that Prysm came up with under different conditions, and 500 GRT seems to be a good one that would protect against this attack under some pretty adverse conditions. We had some charts in the forum post, but with 500 GRT, it meets the zero-profit condition in an economic situation where the annual opportunity cost of capital is 22% or less, and with a very conservative assumption for the GRT price at an all-time low of 10 cents USD. Further, if we were to increase the GRT price or decrease the opportunity cost of capital, those just make the attack less viable, and also decrease the minimum signal that would be required to protect against it.
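The zero-profit reasoning Craig describes can be sketched as a toy model — this is illustrative only, not Prysm Group's actual model. The parameter names and the function itself are assumptions; the 22% opportunity cost echoes the figure from the talk, and the 2.5% slashing rate is an assumed protocol parameter.

```python
# Illustrative (not Prysm Group's actual model): net yearly profit, in GRT,
# of a POI-spoofing indexer on one subgraph.
def spoofing_profit(
    indexing_rewards,       # yearly indexing rewards captured on the subgraph (GRT)
    self_stake,             # slashable self-stake (GRT)
    slash_probability,      # chance the spoofed POI is caught and slashed
    slash_rate=0.025,       # assumed fraction of self-stake slashed (2.5%)
    opportunity_cost=0.22,  # annual opportunity cost of capital (22%, per the talk)
    gas_costs=0.0,          # gas for opening/closing allocations (GRT)
):
    expected_slashing = slash_probability * slash_rate * self_stake
    capital_cost = opportunity_cost * self_stake
    return indexing_rewards - expected_slashing - capital_cost - gas_costs

# The "zero-profit condition": pick a minimum signal high enough that the
# rewards an attacker can capture never exceed these costs.
print(spoofing_profit(100.0, 1000.0, 0.5, gas_costs=50.0) <= 0)  # → True
```

Under these toy numbers the attack loses money; the GIP's analysis picks the minimum signal so that this holds even under adverse assumptions.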
So yeah, that's the situation. The trade-offs here are, of course, that having a minimum curation signal to qualify for indexing rewards creates more friction: it reduces the likelihood that a subgraph would be picked up for indexing for anything with less than 500 GRT signaled. So it creates a little more friction for subgraph developers — anyone not signaling over 500 GRT wouldn't be able to attract indexers.

Ricky has since done some analysis that shows that's a very small number of subgraphs, and as far as we know, none of those subgraphs with less than 500 GRT are being used for production traffic. That could be wrong, but definitely the vast majority of subgraphs being used for production traffic by developers have over 500 GRT. So it wouldn't be a ton of subgraphs affected here, and almost surely none that are used for production-level traffic.

The benefit is that it's a best-effort mechanism to mitigate the attack identified, and I think the even bigger benefit, perhaps, is that we were able to open-source these audits and give the community more information about the economic vulnerabilities. This also increases transparency and gives the community the ability — the tool — to address this attack and make an informed decision based on weighing the risks and the trade-offs here. So yeah.
B
Yeah, just to underline one of the last points that Craig made: for a lot of DeFi protocols that have looked at integrating with The Graph, one of their biggest asks has been having these audits open-sourced, and so that's been kind of a blocker on getting some of the DeFi composability benefits of The Graph being on the same L1 with all these other DeFi protocols. So that's something I think we're really excited to unlock.

Another call to action here: this may feel like a really small outcome for about a year's worth of analysis — we're just basically setting this minimum signal parameter, and now the attack is generally considered mitigated — but I would emphasize that a lot of really good precursor research had to go into even getting to the point of being able to do this analysis. So even if you're not interested in this specific change, I highly recommend looking at the posts that Craig and Pedro published to the forums, because they include a general equilibrium analysis by Prysm of the entire indexer market in the protocol.

Even in the spoofing analysis, there are some really interesting general results. If you look at the appendix of the spoofing analysis, there's a proof that relates the number of unique indexers on a subgraph as being roughly proportional to the square root of indexing rewards over the fixed costs of indexing, and we've managed to verify, both empirically and in simulations, that that relationship seems to hold. So there are a lot of really good nuggets of insight, even just in the process of getting to the point of being able to put these proposals together. And then, obviously, there's the outcome of being able to plug the security hole and unblock the protocol for some of these DeFi opportunities.
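The square-root relationship Brandon mentions can be illustrated numerically. The relationship itself is as stated in the talk; the proportionality constant `k` and the helper function are hypothetical, for illustration only.

```python
import math

# Illustrative use of the relationship from the spoofing-analysis appendix:
# unique indexers on a subgraph scale like sqrt(indexing rewards / fixed costs).
# k is a hypothetical proportionality constant, not a protocol parameter.
def expected_indexers(rewards, fixed_cost, k=1.0):
    return k * math.sqrt(rewards / fixed_cost)

# Quadrupling rewards (or quartering fixed costs) only doubles the number
# of indexers competing on a subgraph.
print(expected_indexers(4000.0, 10.0) / expected_indexers(1000.0, 10.0))  # → 2.0
```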
D
First of all — I've said it before, but the level of rigor in this analysis has been really enjoyable to read, so thank you, Craig, and thank you, Ricky, for the work you've done on this. Just one question, which is sort of ancillary: when we eventually upgrade for this change, are we doing something on the UX side as well, to make subgraph developers aware of this minimum requirement if they want indexing services from the get-go?
B
Yeah, it's a great point. This is something that — I mean, this call is, I guess, part of that process — but this is something that should be widely communicated out to the various front ends. Obviously, the front end that Edge & Node develops is aware of the change.

Fortunately, the change is backwards compatible from a technical standpoint: none of the interfaces are changing, and no transactions are going to revert that previously would have succeeded. But certainly, from an informational standpoint, we'll want to make sure people know that, hey, if you're signaling below 500 GRT, it's not worth it. I think we also already surface, at least in Edge & Node's Graph Explorer or in Studio, if I'm not mistaken, whether a subgraph has been disabled for indexing rewards due to the oracle, so we could maybe even piggyback on some of those informational callouts: hey, because this is below the signal threshold, it's ineligible for indexing rewards.
A
So the other topic we had was GraphSim and the work that Semiotic has been doing. I'm not sure which one you want to get started with, but as an intro, I think it would also be worth talking about the Devconnect event and the Workshop on Incentive Mechanism Validation — or "WIMV," as we're calling it. I think that ties in nicely with the work you guys are doing around GraphSim, right?
E
Yeah, that sounds like a good plan. Let me share my screen.

Morning, everyone. I'm just going to chat about a couple of topics, like Pedro mentioned. Firstly, we'll talk about the Devconnect event — in particular, the Workshop on Incentive Mechanism Validation — and then we'll also talk very briefly about GraphSim, which is a simulation effort that we're putting together, as well as an allocation optimization tool for indexers that we've been working on. Cool.
E
So
the
workshop
that
we
put
together
in
amsterdam
was
basically
dedicated
to
the
practical
application
of
optimization
control,
theory
and
reinforcement,
learning,
just
basically
agent-based
modeling
techniques,
for
the
development
and
validation
of
ethereum
protocols
and
was
held
on
the
18th
of
april
as
part
of
dev
connect
in
amsterdam,
and
we
had
approximately
50
people
attending
and
there
was
a
good
mix
of
people
who
attended
in
terms
of
investors
and
founders,
builders,
students,
developers,
ml
people,
economics,
people,
so
so
quite
a
variety
there.
E
The recordings are not up yet, to my understanding, but hopefully those should be up soon. Skipping ahead to the takeaways: it was generally a good networking event, but some of the feedback we got was that there were just too many talks — especially considering it was a Monday, with a lot of people flying in from the US, Australia, East Asia, and India.
E
Also, with respect to that, some of the feedback I received when I talked to people was that the breakout sessions we had as part of the second workshop — the WIMV workshop — were so late in the day that, by that point, people were tired and couldn't really actively participate, just because their brains weren't functioning. Ariana actually brought up an interesting point in a previous discussion: in these breakout sessions, after seven hours of us chatting, people came up and started to ask questions like "what's a subgraph?" and "what's an indexer?" That's probably something we should do a better job of explaining upfront, so that those questions don't come up so far into the conversation.
E
There
were
also
a
lot
of
interesting
case
studies,
but
people
wanted
things
that
they
could
take
away
and
actually
implement
or
use
themselves,
and
a
couple
of
the
people
that
I
talked
to
felt
like
that
was
sort
of
lacking,
like
they
learned
quite
a
lot
about
problems
within
the
graph
of
problems
within
life
here.
But
they
didn't
know
how
to
take
that
and
apply
it
to
problems
within
their
own
domain.
Necessarily
so
we
can.
E
We can probably improve something there as well. And then there was also a request for new tools to enable mechanism research, which brings us to GraphSim, a new technology we're developing where we're trying to containerize the entire Graph protocol — everything from the indexer agent all the way through to the smart contracts. Part of this is so that we can use it as a tool to test changes to the protocol and be more confident before we enact them.
E
Part of it is also so that we can test new PRs and things like that — not just mechanism changes or GIPs, but just PRs and bug fixes — and see whether they'll break something. But there's probably also some use for this tool in the broader community, in terms of using it to better understand what they should be doing within The Graph.

We're still in the very early stages of this, so nothing concrete to share with you yet, but hopefully we'll have more updates for you on that in the coming months. I think that's all for this deck — were there any questions on that before I move on to the allocation tool?
B
I just wanted to add, on the workshop at Devconnect, that one thing that was really cool about this is that it's been a few years, thanks to COVID, since a lot of the researchers in this space have gotten together in person. Two or three years ago, a lot of these different ecosystems — whether it was Livepeer, or Barnabé at the Ethereum Foundation's Robust Incentives Group — were in the very, very early stages of applying these techniques: optimization, agent-based modeling, a lot of the stuff that Semiotic has brought into The Graph ecosystem. It was really amazing to see that a lot of really talented teams had converged on the same sets of best practices and were interested in exploring the same sets of tools. Filecoin's CryptoEconLab, which is, I think, less than a year or two old, was also there presenting on their simulations and their experiments.
E
Definitely, I totally agree with all of that. I think one of the interesting things that came out of that workshop was seeing not just that we've converged on the same tools, but that we have a lot of the same problems, so there's probably a lot of room for collaboration, or more open discussions, between all these different protocols.

Any other comments or questions before we move on? Otherwise, I'm happy to jump forward. All right, let's do it.
E
Cool, so: the indexer allocation optimizer. This is a project within the Indexer Experience working group, but it's primarily being worked on by myself, Hope from Figment, and Howard Heaton from Edge & Node. We're at a stage, as you'll see, where I want to more concretely define what we mean by "optimal," because there are some caveats to that word, but we're also not yet at a stage where I would recommend this to indexers in production.
E
So the motivation is really just to provide indexers with a tool that allows them to allocate optimally with respect to indexing rewards. A couple of things to note on this slide: we are talking about indexing rewards — we're not considering anything about query fees in this allocation tool — and then, like I said, we'll also have to dive into the word "optimal" a little bit, to make sure that indexers understand what we mean by it.
E
So,
just
a
a
brief
overview
for
people
who
haven't
seen
the
indexing
reward
before
and
also
an
introduction
to
the
notation
that
we're
using
so
an
indexer.
I
is
going
to
spend
gas
g
to
allocate
stake
omega.
I
j
out
of
its
total
stake
sigma.
I
stake
to
subgraph
j,
so
lots
of
lots
of
words
there.
If
you
ever
forget
the
notation,
let
me
know-
and
I
can
I
can
jump
back
real
quick
subgraph
j
has
a
psi
j
token
signaled
on
it
and
indexer.
I
receives
an
indexing
reward
r
sub.
E
I
based
on
its
allocations
and
the
issuance
and
it's
given
by
this
formula
here
where
essentially,
we
have
two
terms
right,
the
second
term,
the
the
phi
times.
The
psi
ratios
is
basically
the
issuance
times
the
ratio
of
how
much
signal
is
on
subgraph
j
versus
how
much
signal
is
on
all
subgraphs.
So
it's
basically
the
proportion
of
the
issuance
that's
on
a
particular
subgraph
and
then
the
first
term.
The
omega
ij
term
is
basically
how
much
index
your
I
has
allocated
to
that
subgraph
versus
how
much
all
other
indexers
have
allocated
to
that
subgraph.
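The two factors just described can be sketched as follows. The symbol names follow the talk's notation; the function itself is illustrative, not the tool's API.

```python
# Sketch of the per-subgraph indexing reward as described: indexer i's reward
# on subgraph j is the issuance Φ, scaled by subgraph j's share of total
# signal (ψ_j / Σψ) and by i's share of total allocation on j (ω_ij / Σ_i ω_ij).
def indexing_reward(omega_ij, total_allocated_j, psi_j, total_signal, issuance):
    allocation_share = omega_ij / total_allocated_j  # ω_ij / Σ_i ω_ij
    signal_share = psi_j / total_signal              # ψ_j / Σ_j ψ_j
    return allocation_share * signal_share * issuance

# E.g. holding 25% of the allocations on a subgraph carrying 10% of all
# signal earns 2.5% of issuance.
print(indexing_reward(25.0, 100.0, 10.0, 100.0, 1000.0))  # → 25.0
```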
E
Okay, so the optimal allocation problem is essentially a minimization problem. Technically, it's a maximization problem — indexers want to maximize their reward — but if we negate it, we can treat it as a minimization problem instead, and that allows us to use standard convex optimization techniques.

So we're going to minimize the negative indexing reward, subject to the constraint that the sum of the indexer's allocations equals their stake. What this does mean is that this tool will allocate all of an indexer's stake — it won't leave a single GRT on the table — and also subject to the constraint that the allocations themselves must be non-negative, which is understandable. And like I said, this is a convex optimization problem, which is quite nice because we can use these standard convex optimization techniques.
E
Just a brief introduction to convexity, in case you've not heard that term before. A convex function is a function with one global minimum — in a sense, one point where the slope is zero — and so to find the minimum, all we have to do is follow the slope. Intuitively, you can think about a ball rolling down a curve: in a convex function, the ball is guaranteed to end up stuck in the same place no matter where you start from. In a non-convex function, the ball can get stuck in a lot of different places. If you look at the figure on the right: if we start the ball where it says "saddle point," the ball is not going to move left or right — it's just going to stay there, because there's no slope to drive it in either direction. Or, if we start on the far left of that curve, it's going to end up stuck in a local minimum instead of the global minimum on the far right. Whereas if you look at the convex plot on the left, no matter where you start, you're always going to end up at the same point.
E
Okay, so how do we optimize convex functions? Think back to high school calculus: if we have a parabola defined by y = x², how do we find its minimum? It's pretty simple: you find the first derivative, set it equal to zero, and solve for x. In the case of this particular parabola, x = 0.
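The derivative trick and the rolling-ball intuition can be checked numerically. The `descend` helper here is hypothetical, written just for this illustration — it is not part of any tool discussed.

```python
# The "ball rolling downhill" intuition: gradient descent on the convex
# f(x) = x^2, whose slope is f'(x) = 2x, reaches the same minimum from any
# starting point.
def descend(x, step=0.1, iters=200):
    for _ in range(iters):
        x -= step * 2 * x  # follow the slope downhill
    return x

# Both starting points land at the global minimum x = 0, where f'(x) = 0 —
# exactly what setting the first derivative to zero predicts.
print(abs(descend(5.0)) < 1e-9, abs(descend(-3.0)) < 1e-9)  # → True True
```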
E
So we can essentially do the same thing with our minimization problem. We have this convex function that we could minimize out of the box; the problem is the couple of constraints we have. If I hop back a couple of slides, you can see we have the sum constraint and the non-negativity constraint, so we have to find a way to take these constraints into account when we optimize as well.
E
Solving for ω_ij gives us the formula that you see here. Roughly speaking — I know this is quite a lot to look at — this formula takes into account not just the signal on a particular subgraph, but also how much other indexers have allocated to that subgraph, and the way it takes the other indexers into account is through a square root, which is quite interesting as well.
E
And then we also have to deal with the constraints: we have two, the sum constraint and non-negativity. Using dualization, we can push both constraints into the objective function, and then, using the KKT conditions, we can use the same process of setting the first derivative equal to zero to solve for the dual variable v. Once we have v, we can use that to solve for ω_ij — and this is actually what our code does.

So, notice a couple of things about this optimization problem. At this point, we're not taking into account anything to do with gas. We're not taking into account anything to do with indexer preferences — whether they would prefer to reallocate or prefer to hold on to existing allocations. We're not taking into account anything about indexers' own operational costs, or things like how much time it will take to sync a particular subgraph. This optimization is purely done over the indexing reward, so when we talk about "optimal" in this context, at least for now, this tool is only thinking about the indexing reward — it's not optimal with respect to anything else.
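The dual/KKT approach just described can be sketched roughly as below. This assumes the per-subgraph reward has the ω/(Ω+ω) form given by the formula earlier; the real implementation is AllocationOpt.jl, and its details may differ.

```python
import math

def optimal_allocation(signals, others_alloc, stake):
    """Sketch of the dual/KKT solution described in the talk. signals[j] is
    subgraph j's reward factor (its share of issuance via signal),
    others_alloc[j] is Omega_j, the stake other indexers have allocated to j,
    and stake is this indexer's total stake sigma."""
    def alloc_for(v):
        # Stationarity: psi_j * Omega_j / (Omega_j + w_j)^2 = v
        # => w_j = sqrt(psi_j * Omega_j / v) - Omega_j, clipped at zero
        # (the clip handles the non-negativity constraint).
        return [max(0.0, math.sqrt(p * o / v) - o)
                for p, o in zip(signals, others_alloc)]

    # Bisect on the dual variable v until the allocations sum to the stake:
    # larger v shrinks every allocation, so the sum is monotone in v.
    lo, hi = 1e-18, 1e18
    for _ in range(200):
        v = math.sqrt(lo * hi)  # geometric midpoint, since v spans many magnitudes
        if sum(alloc_for(v)) > stake:
            lo = v              # allocated too much -> raise v
        else:
            hi = v
    return alloc_for(math.sqrt(lo * hi))

# Two subgraphs: the one with more signal attracts the larger allocation,
# and the whole stake is used — no GRT left on the table.
w = optimal_allocation([100.0, 50.0], [10.0, 10.0], 30.0)
print(round(sum(w), 6))  # → 30.0
```

Note the square root in `alloc_for`: this is where the other indexers' allocations Ω_j enter, matching the square-root dependence mentioned a moment ago.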
E
So, with that in mind, how do we incorporate gas costs? We do expose a way to incorporate gas costs, and basically what that will do is run the normal optimization problem that we talked about, and then brute-force a solution with the gas cost. We can't guarantee optimality with that, because again, it's a brute-force thing — it's not actually based on any sound mathematics — but in practice, saying it gets to within five percent of what's actually optimal would be a pretty safe bet. Going forward, we do plan to push the gas cost into the actual optimization problem, so that when indexers use this tool, they will be allocating optimally with respect to gas as well.

Okay, so, how to use this tool. First of all, the tool is on The Graph protocol's GitHub, in the AllocationOpt.jl repository.
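One plausible reading of the brute-force step — an assumption on my part, not necessarily what AllocationOpt.jl actually does — is to re-run the gas-free optimizer restricted to k subgraphs for each k, then pick the k whose net reward survives the gas bill:

```python
# Hypothetical sketch of brute-forcing gas into the allocation choice:
# gross_reward_for_k[k] is the gas-free optimizer's reward when limited to
# opening k allocations; net reward subtracts gas for k open/close cycles.
def best_plan(gross_reward_for_k, gas_per_allocation):
    return max(gross_reward_for_k,
               key=lambda k: gross_reward_for_k[k] - k * gas_per_allocation)

# With gas at 40 per allocation, the marginal 3rd subgraph (worth only 20
# extra gross reward) isn't worth opening.
gross = {1: 500.0, 2: 580.0, 3: 600.0}
print(best_plan(gross, 40.0))  # → 2
```

Because this only searches over a discrete set of candidates, it can't certify optimality — consistent with the "within roughly five percent" caveat above.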
E
There are instructions here — we also have instructions in the GitHub repo — so I'm not going to read through them, but I can quickly demo in case none of you have used Julia before. All you have to do is open up the Julia REPL from the project directory — you can see I'm in AllocationOpt.jl — then activate the environment, which loads all of the packages that we're using, and then you say `using AllocationOpt`, which is the allocation opt package. The function that we expose is just `optimize_indexer`. I'll just use one that I've run in the past for the sake of demonstration, but that's pretty much it. For now, the only way to interact with this tool is via the Julia REPL.
E
Hopefully fairly soon, we plan to integrate this with the action queue that Ford's building, and that will provide a nice GUI front end for using this tool, so you won't have to deal with the Julia REPL at that point. So yeah, this is sort of what you get — I won't get too much into that right now. And again, just as a warning, we don't yet recommend indexers use this tool, just because of a few bugs.
E
So, next steps: we want to add a frontend GUI via the action queue, like I mentioned; we want to incorporate more tunable indexer preferences; and we're going to start explicitly optimizing over the gas costs.

We were also given some advice by Chris: he would like the option to freeze certain allocations, such that they won't be considered in the optimization process. And currently, the optimizer only optimizes at a fixed point in time, whereas it might actually be beneficial for indexers, for example, to not allocate some GRT now and allocate it later. So there's a time-domain component that we're not yet considering in the optimization problem, and potentially, eventually, we'll also get to optimizing over time.
E
That's it for me. We also welcome any suggestions from indexers, or anyone — if you want a particular feature from us, just let us know and submit a feature request on the repo. But that's it, thanks. If there are any questions, I'm happy to answer.
D
Hi, I have a question. I really like how you explained the tooling and how much thought is in it. I would love to have a benchmark against existing allocation optimization tooling, because it looks like you put a lot of effort into it. And another thing we encountered by using the allocation tooling developed by AnyBlock Analytics: all these toolings are great, but in practice, we need to also optimize for syncing times.
E
Cool, yeah — those are both great points. Benchmarking, I think, is definitely very important to us. We haven't formally benchmarked yet. What we have done is taken several indexers, randomly sampled from The Graph, run them through the tool, and seen how much better we can do. On average, from this sample of indexers, we're able to improve their indexing rewards by somewhere around 50 to 80 percent, with some indexers being sub-optimal enough that we can get closer to a 200 percent increase in indexing reward. So, no formal benchmarking yet, but those are the hand-waving numbers I can give you from our small tests.

With respect to sync times: yeah, this is something we have considered, and it's something that we want to potentially incorporate into the optimization problem.
E
I would say, for now, one of the things we actually expose in the optimizer is a whitelist and a blacklist. So if there are subgraphs that you just don't care to sync, adding them to the blacklist as a temporary measure would be sufficient to say, "hey, I don't even want to consider allocating to this subgraph, because it'll take me three months to sync." It's not the best solution, because we're not actually optimizing there.
F
I have a question, or a suggestion — you might consider it anyhow. Great program, by the way; it's really nice to have something like this. But I guess it would suggest allocating, for example, one GRT to a subgraph where there are no other indexers, or just other indexers who also allocated only one GRT, since that's how you'd get maximum rewards. But still, anyone can enter and ruin your proportion, and ruin your indexing rewards. It would be a great thing to include something like a minimum allocation of tokens, just to prevent such cases — or to allocate something that would be hard to overcome for other indexers, just like in game theory.
E
Yeah, that's also a great point, and it's something we've considered as well — it's actually on our kanban board to add a minimum allocation, so it's a great thought. With respect to being able to react to other indexers: I sort of have two thoughts here. One is that, in part, this is talking about optimizing over time, right — do you want to be able to react if there's some opportunity because some other indexer has moved? Again, that's not a perfect solution, because it's not truly optimal in any sense, but what it does do is give you the ability to react when you're no longer allocated favorably.
F
And
also
one
more
question
or
suggestion
again:
every
indexer
currently
has
some
allocations
on
subgraphs
and
they
question
whether
should
they
reallocate
them
for
better
proportion
for
to
optimize
it
or
keep
it
as
it
is
because
it
would
provide
more
profit
in
a
certain
amount
of
time,
for
example,
in
10
days.
Maybe
the
current
allocations
would
give
a
modularity,
because
you
will
spend
much
more
on
gas,
but
in,
for
example,
20
days,
the
new
more
optimized
solution
would
provide
more
charity.
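The trade-off in this question is just arithmetic: reallocating costs gas up front, so it only wins once the extra daily reward has paid that cost back. A sketch with made-up numbers (the figures are purely illustrative):

```typescript
// Sketch of the gas-vs-horizon trade-off with hypothetical numbers:
// reallocating pays a one-off gas cost, so it only wins once the extra
// daily reward has covered that cost.
function breakEvenDays(
  currentDailyReward: number,
  optimizedDailyReward: number,
  gasCostOfReallocating: number,
): number {
  const extraPerDay = optimizedDailyReward - currentDailyReward;
  if (extraPerDay <= 0) return Infinity; // never pays off
  return gasCostOfReallocating / extraPerDay;
}

// e.g. 100 GRT/day now, 110 GRT/day after reallocating, 100 GRT of gas:
// break-even after 10 days, so over a 20-day horizon the move is worth it,
// but over a 5-day horizon it is not.
const days = breakEvenDays(100, 110, 100); // 10
```
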
E
Yeah
great
point,
so
one
of
the
things
that
hope
has
implemented
is
something
that
we
call
the
well
she's
implemented
two
things,
but
one
of
them
is
the
preference
ratio,
and
if
you
specify
the
preference
ratio,
when
you
use
the
optimize
indexer
function,
essentially
what
it
tunes
is
your
willingness
to
reallocate
versus
how
much
you
want
to
just
hold
on
to
existing
allocations.
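The preference ratio is only described loosely here. One plausible reading (an assumption for illustration, not the actual rule inside the optimizer) is a stickiness threshold: only move an allocation when the proposed reward beats the current one by at least that factor.

```typescript
// One plausible reading of a "preference ratio" (the actual rule inside
// the optimizer may differ): keep an existing allocation unless the
// proposed allocation beats it by at least `preferenceRatio`.
function shouldReallocate(
  currentReward: number,
  proposedReward: number,
  preferenceRatio: number, // 1.0 = move on any improvement; higher = stickier
): boolean {
  return proposedReward >= currentReward * preferenceRatio;
}

// With a ratio of 1.1, a 5% improvement is not enough to trigger a move,
// while a 20% improvement is:
shouldReallocate(100, 105, 1.1); // false
shouldReallocate(100, 120, 1.1); // true
```

A knob like this is one way a tool can let indexers trade pure optimality against gas costs and churn, which is the tension raised in the previous question.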
E
But definitely we want to be able to support that. We understand that we're dealing with optimality in a sandbox, and indexers have a much wider perspective, so we want indexers to be able to use their preferences to push the tool in one direction or another, even if it brings us further away from optimality.
F
Yeah,
thank
you
for
your
answers.
I'm
sure
I'll
be
waiting
for
all
these
plans
and
improvements.
Thank
you.
Thank
you.
A
Yeah
thanks
annie
guys.
If,
if
you
have
more
questions,
maybe
you
can
follow
up
in
the
forum
in
the
interest
of
time
we
should
move
on.
I
guess
so
adam
I
think
adam
is
a
nice
demo
for
us
as
well.
G
A nice demo? That remains to be seen! I guess there were two things on the list today. One was the epoch block oracle. We spoke about this, I think, a couple of months ago. "Epoch block oracle" is maybe a slightly cryptic title; essentially it is a key piece of protocol infrastructure that will enable indexing of not just Ethereum mainnet subgraphs on the network.
G
So
it's
really
providing
that
like
key
piece
of
information
which
is
like
which
block?
Should
I
be
closing
my
allocations
against
for
networks
that
aren't
ethereum,
which
is
that
made
like,
like
the
major
major
blocker,
so
yeah,
so
the
gip
was
posted,
the
gip
is
actually
slightly
behind
some
of
the
implementation,
which
has
been
going
on,
which
is
which
is
great.
G
There's
there's
been
a
lot
of
work
like
across
a
few
fronts,
both
from
the
data
edge
side,
so
exactly
being
had
a
separate
joke
beyond
what
a
data
edge
is
basically
a
really
efficient
way
to
store
data
on
chain
and
then
have
that
the
execution
of
chain
that's
being
manifest
for
the
first
time
in
in
the
block
oracle.
G
It's
a
great
work
from
a
whole
bunch
of
people,
including
one
who's,
who's
been
turning
cool
data
on
chain
into
into
a
meaningful
subgraph
and
then
fliver
and
thiago
from
engine.
I'd
have
also
been
working
on
the
more
like
operational
like
what
would
actually
take
to
have
that
thing
running
on
an
ongoing
basis,
which
I
think
is
often
like
the
detail
that
can
get
lost
when
it's
coming
to
these,
like
protocol
facing
jerps
of
like
like,
what's
up
like
like
what
are
the
operational
impacts
of
running
this.
G
So
I'd
encourage
everyone
to
like
take
a
take.
A
look
at
that,
obviously
being
very
like
whether
there's
a
lot
of
work
going
on
as
we
as
we
speak,
but
certainly
be
good
to
get
get
different
people's
perspectives.
We
obviously
made
some
trade-offs
in
in
the
design
of
that
gop
in
the
overall
system,
but
we're
super
excited
because
it
does
unlock
such
enormous
potential
value
in
having
not
just
maintenance
over
us
on
the
network.
G
So
I've
always
there-
and
I
guess
the
other
the
other
thing
and
like
unless
there
are
any
specific
questions
about
the
epoch
block
oracle.
I
did
the
classic
thing
of
posting
it
with
only
a
couple
hours
before
the
meeting,
so
nobody
could
read
my
homework
before
before
we
showed
up
so
sorry
about
that.
A
Yeah,
I
I
just
posted
the
link
to
the
forum
as
well,
where
you
have
some
nice
discussion
and
what
I'll
do
is
I'll
just
share.
Also
your
intro
you've
done
in
the
previous
quarter
call.
I
think
that
was
the
call
number
10
so
I'll,
just
post
a
link
here
as
well.
If
folks
want
to
give
up,
I
just
want
to
watch
a
quick
intro.
G
Cool
awesome,
then,
I
think
the
other
thing
on
the
agenda
was
kind
of.
At
the
other
end,
I
guess
of
graph
nodes
sort
of
considerations,
obviously
the
network
considerations,
but
it
was
about
the
graph
client,
which
is
really
the
sort
of
developer
experience
developer,
like
yeah,
devex,
sort
of
focus
of
graph
node,
and-
and
this
is
an
area
where
we've
also
like,
I
think,
we're
making
a
bunch
of
improvements.
G
I
think
the
guild
had
presented
recently,
obviously
working
on
graph
client
graph,
hard
hat
is
a
similar
thing,
which
we
hope
will
make.
Developers
lives
easier,
and
I
also
so
I
kind
of
just
wanted
to
demo
a
couple
of
things
so
caveat,
always
with
demos.
What
I
kind
of
want
to
highlight
here
is
just
some
of
the
developer
experience
things
that
are
already
happening.
Something
like
like
the
reason
why
I
have
plugin
might
be
really
useful
and
yeah,
maybe
it'll
all
fall
over.
So
let's
see
how
we
go.
G
So
what
am
I
running
here?
If
I
give
myself
a
chance
to
see
anything
perfect,
so
I'm
running
the
latest
build
of
master
I'll
talk
about
white
like
why
that
is
in
a
second
I'm
running,
a
local,
a
local
chain
and
I'm
running
a
local
app
just
just
because
if
I
move
this
up
here,
that's
going
to
give
me
the
best
chance
to
make
everything
smaller.
G
So
I
guess
the
first
thing
I
want
to
kind
of
highlight
is
is
is
a
new
feature
which
has
been
really
long,
long
requested,
which
is
orthogonal
to
the
graphic
hardhat
client.
But
it's
like
it's
effectively
the
the
fact
that
you
can
like,
as
of
the
latest,
build
and
not
yet
released,
but
coming
soon
you
can
access
information
about
receipt
during
a
mapping
as
well
as
like,
as
well
as
as
well
as
information
about
the
transaction.
G
So
if
we
jump
to
here,
that's
the
default
block
oracle.
Previously
you
have
the
transaction
information,
but
you
didn't
have
like
access
to
the
receipt
which
has
a
load
more
information
in
it,
and
this
is
really
good
where
you
can't
so.
The
receipt
essentially
is
servicing
more
information
for
developers,
the
sort
of
like
information
this
includes
is
like
most
saliently,
the
gas
used
on
a
given
on
a
given
transaction,
which
wasn't
available.
G
We're
essentially
more
information
like
most
certainly
there's
logs,
which
essentially
super
relevant
for
nfts,
who
maybe
want
to
track
a
sale,
and
so
they
want
to
know
the
other
things
that
have
happened
in
the
same
transaction.
So
it's
unlocking
more
functionality
in
terms
of
what
it
takes
to
enable
that
on
a
sub
graph,
it's
pretty
simple,
essentially
on
the
event
handler,
you
actually
just
say
receipt.
True.
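The manifest change described here is a single flag on the event handler. A sketch of what that looks like in subgraph.yaml (the event signature and handler name below are made up for illustration; `receipt: true` is the flag from the talk):

```yaml
# subgraph.yaml (excerpt) — the event and handler names here are
# illustrative; `receipt: true` opts this handler into receipt data.
eventHandlers:
  - event: NewPurpose(address,string)
    handler: handleNewPurpose
    receipt: true
```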
G
The
reason
why
we're
not
passing
receipts
to
everything
is
that
there
is
a
an
additional
rpc
rpc
called
to
fetch
this
information,
but
then,
once
you've
got
that,
then
within
your
source
code
you
can
reach.
You
can
get
the
information
about
receipt
to
and
and
save
it.
So
here
we're
essentially
adding
the
guess
used
for,
like
a
purpose-type
transaction.
The
broader
thing
I
wanted
to
kind
of
demo
demo
here
so
we'll
we'll
like
have
a
look
at
that
in
action.
I
think
you
can
kind
of
see
it
in
this.
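The mapping-side access pattern can be sketched like this. Real mappings are AssemblyScript and import these types from `@graphprotocol/graph-ts`; the stand-in interfaces below are simplified assumptions so the shape of the access pattern is the only point:

```typescript
// Stand-ins for the graph-ts shapes (in a real mapping these come from
// @graphprotocol/graph-ts and the handler is AssemblyScript); only the
// access pattern is the point here.
interface TransactionReceipt {
  gasUsed: bigint;
  // a real receipt also carries logs, status, cumulativeGasUsed, ...
}

interface EthereumEvent {
  // null when the handler was not declared with `receipt: true`
  receipt: TransactionReceipt | null;
}

// Pull the gas used out of an event, if receipts were enabled for it.
function gasUsedFor(event: EthereumEvent): bigint | null {
  return event.receipt ? event.receipt.gasUsed : null;
}

gasUsedFor({ receipt: { gasUsed: 21000n } }); // 21000n
gasUsedFor({ receipt: null });                // null
```

The null check matters: receipts are only populated for handlers that opted in, which is how the extra RPC call stays opt-in per handler.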
G
Maybe now, if we give it a second... yeah, so that one is slightly different; I suppose it's different gas.
G
This
is
a
new
functionality
which
we're
like
we're
excited
to
see
like
there
are
some
nuances
and
accessing
stuff,
but
yeah
I
know
simon
has
like
has
been
keen
for
this
for
a
while,
so
we're
excited
to
have
it
out
there
on
the
hosted
service
in
the
latest,
like
alpha
version
of
graphql
graphic
line
anyway.
Oh
that's
my
music
fantastic.
G
So
then,
the
next
thing
that
I
want
to
highlight
is
the
essentially
like
the
overall
workflow
and
unconscious
of
time.
So,
if
folks
want
to
drop
off
no
hard
feelings,
I
want
to
kind
of
demonstrate
and
the
workflow
that
we
want
to
really
improve
with
hard
at
graph
plugin.
So,
firstly
like
if
I'm
developing
a
contract,
I
might
essentially
like
change
something
here.
So
maybe
I
want
to
not
have
this
log
that
should
actually
reduce
the
gas
cost,
the
gas
price
for
this
for
this
transaction.
G
Then I redeploy the subgraph. Give that a second, it'll compile... so that's deployed to my local network. A few things here. Firstly, I'm working in this hardhat repository. One of the key things is that I've deployed a new contract, so I've got a new address, and passing that into my subgraph is a bit of a hassle; copy-pasting between those things isn't ergonomic for a developer.
G
So if you update this, then you can update the YAML files, which simplifies that loop. The current workaround in my local development setup is that when I deploy, I've got a post-deployment script that updates this file. The hardhat-graph plugin could essentially do that automatically; it would be aware of where it needs to push the new contract information and where it should go. So now, if we go back here, I think it should just be build, network localhost; hopefully this 9ef thing should jump...
G
Into
the
sub
graph
yeah,
so
you
can
see
that
this
address
was
then
updated.
So
that
was
because,
if
here
you
can
see
that
I've
passed
this
flag
saying
network
localhost,
so
it
knew
I'm
certain
you
to
use
the
localhost
there's
only
one
network
here.
If
I
had
more
time,
I
was
going
to
deploy
to
covan
and
then
like
do
the
automatic
update,
but
you
kind
of
get
the
idea.
The
the
hardhat
plugin
will
then
make
that
even
tighter.
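The post-deployment workaround described above, a script that pushes the freshly deployed address into the manifest, might look roughly like this. The manifest layout and the regex are assumptions about a typical subgraph.yaml, not the actual script from the demo:

```typescript
// Rough sketch of a post-deploy step that rewrites the contract address in
// a subgraph manifest. The manifest layout and regex are assumptions about
// a typical subgraph.yaml; hardhat-graph aims to make this plumbing go away.
function updateManifestAddress(manifest: string, newAddress: string): string {
  // Replace the first `address: "0x..."` entry under a dataSource.
  return manifest.replace(
    /address:\s*["']0x[0-9a-fA-F]{40}["']/,
    `address: "${newAddress}"`,
  );
}

const before =
  'source:\n  address: "0x0000000000000000000000000000000000000000"\n  abi: Purpose';
const after = updateManifestAddress(
  before,
  "0x9ef0000000000000000000000000000000000000",
);
// `after` now points the dataSource at the freshly deployed contract
```

Wiring this into the deploy script closes the copy-paste loop by hand; the plugin's job is to make even this unnecessary.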
G
So
so
you
don't
so
developers,
don't
have
to
do
that
sort
of
plumbing
themselves,
so
it
can
like
really
streamline
the
developer
experience.
If
I'm,
then,
if
I
then
deploy
this
again,
I
guess
here
giving
this
a
chance.
G
The
file
that
then
deploys
to
my
local
graph
node,
you
can
see
it
starts
indexing.
Now.
If
I
refresh
this
one.
Firstly,
we
go
here
purposes
and
look
at
ide
purpose.
Gas
used
there's
nothing
yet
because
I
haven't
actually
made
any
transactions.
You
won.
G
Now
you
can
see
that
if
I
do
this,
that
first,
yes
user-
I
think
that
was
actually
a
bit
cheaper
than
before.
If
you've
got
a
memory
for
like
precise
numbers
but-
and
I
think
that's
because
we
reduced
the
lightroom
with
the
console.log
file
so
anyway,
that's
hopefully
giving
you
a
sense
of
the
developer
workflow
that
we're
trying
to
improve
and
make
more
streamlined.
I'm
highlighting
this
new
feature
yeah.
I
think
this
was
great
work
by
thiago.
G
I'm
so
excited
to
see
that
and
something
so
to
see
what
people
build,
but
yeah
constant
people's
time.
We're
a
couple
of
minutes
over
I'm
happy
to
hang
around
and
answer
questions,
but
also
wanna.
Let
people
head
off
if
they've
got
five
o'clock.
G
You've
gotta
use
the
you've
got
to
use
the
alpha.
The
alpha
version
of
graph
ts
and
graph
cli.
D
Yeah
because
I
already
kind
of
showed
this
to
another
developer,
I
think
two
weeks
ago
and
it
was
not
possible
yet.
G
Think
there's
some
nuance
on
how
you
like
how
you
decode
the
logs
for
the
other
lock
for
the
other
events
which
we
talked
about,
but
I'm
sure
eva
will
be
happy
to
help
us
with
that
sort
of
particular
api
decoding
challenge.
A
Pretty
cool
nice
thanks,
adam
yeah,
also
shout
out
to
lyman
chain
who's,
been
leading
this
work
on
the
plugin
development
itself,
all
right
cool.
So
it's
going
to
be
recorded.
You
can
watch
this
at
a
slower
pace,
maybe
on
youtube.
If
you
want
to-
and
I
guess
we
can
wrap
it
up
here.
Thank
you
all
for
joining
I'll,
see
you
on
the
next
one.
It's
going
to
be
on
the
26th
of
may
again
do
subscribe
to
the
our
ecosystem
calendar
just
head
on
to
the
graph
dot
foundation.
You'll
see
an
ecosystem
counter
link
just
subscribe.