From YouTube: The Graph's Core Devs Meeting #8
This video was recorded: Tuesday, October 5 @ 8am PST, 2021.
Follow The Graph on social media:
Twitter: https://twitter.com/graphprotocol?s=20
Instagram: https://instagram.com/graphprotocol
LinkedIn: https://www.linkedin.com/company/thegraph/
GitHub: https://github.com/graphprotocol
Website: https://thegraph.com
A
Welcome, everyone, to another core devs talk. I believe this is our eighth call already. We're happy to have you, obviously. If you're new here, this is a place where we talk about and discuss the latest updates within the products and protocol, as well as upcoming changes.
A
Here, you will hear directly from core devs how ongoing work is moving forward: the current major engineering struggles, the major breakthroughs, while also understanding what the current focus areas really are at the moment.
A
We try to keep this conversational, while proposing topics to be addressed by the core devs to spark some discussion. We have some ideas for today; I think it's going to be quite interesting. We expect you to engage in these conversations as well, so feel free to use the chat to ask questions.
A
Yeah, these calls are always recorded as well, so you can later check them out on YouTube; we do upload them a couple of hours afterwards. So, I believe we have a couple of updates we might want to talk about. I think it's a cool, interesting topic to raise. We have some GIPs coming; the core devs have been talking about a few things we want to post on the forum.
A
For greater visibility, and to have your feedback on it as well. I think we should start there. Yeah, so it's going to be a process. That's the overarching goal, really: for us to start using the GIP process more often, so you guys are aware of the things we want to work on, to get your feedback on it, and also to use that to update you on the latest, on how things are moving forward, really.
C
Sure, yeah. I don't see Adam right now; he may be on.
C
Cool, so yeah. We're preparing a bunch of GIPs for different things; some address issues with the current network software, and others are new features. One that I've worked on so far, that has kind of gone through internal review (the next step is to share it properly, post it on the forum, and then also discuss it among the core devs), is about situations where indexers try to handle a query, try to process the query, but can't return a result, or don't have the confidence in returning a result, maybe because they have a corrupt database. Right now, graph-node basically spits out an error, but it looks like any other query error, like a syntax error in the query, or a schema mismatch.
C
Right now we can't tell these things apart, and so any time an indexer fails because of some internal problem with their own infrastructure, the clients, or right now the gateway, treat it as a successful result and return it back to the consumer. That is a problem, because really, what we want to do in situations where we get these non-deterministic results that vary from indexer to indexer...
C
We want to try other indexers, to see if they have the same problem, or if they can return a successful response that actually has data, and doesn't have problems due to, for instance, database corruption. So this requires a few changes in a few places: graph-node, the indexer service, and also the gateway software. The GIP for that is almost ready; I just need to put some finishing touches on it. That will then allow us to handle these things more gracefully, and also remove the potential slashing risk of an indexer returning a result with some internal error that just doesn't make much sense.
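The retry behavior being described can be sketched as follows. This is a toy illustration in Python, not the actual gateway code; the error classification (`DETERMINISTIC` vs `INDEXER_INTERNAL`) is a hypothetical stand-in for whatever taxonomy the GIP ends up specifying.

```python
from dataclasses import dataclass

# Hypothetical error kinds; the real GIP defines the actual taxonomy.
DETERMINISTIC = "deterministic"    # e.g. query syntax error, schema mismatch
INDEXER_INTERNAL = "internal"      # e.g. corrupt database on one indexer

@dataclass
class Response:
    ok: bool
    kind: str = ""
    data: object = None

def route_query(query, indexers):
    """Try indexers until one returns data or a deterministic error."""
    last = None
    for indexer in indexers:
        resp = indexer(query)
        if resp.ok:
            return resp          # successful result: done
        if resp.kind == DETERMINISTIC:
            return resp          # every indexer would fail the same way: stop
        last = resp              # internal failure: try the next indexer
    return last                  # all indexers failed internally

# Toy indexers: one with a "corrupt database", one healthy.
corrupt = lambda q: Response(False, INDEXER_INTERNAL)
healthy = lambda q: Response(True, data={"answer": 42})
```

The point of the classification is exactly the routing decision in the loop: an indexer-internal error is worth retrying elsewhere, while a deterministic query error is not.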
C
There shouldn't be a risk of slashing there. An extension of that could be, for instance, in the future: if an indexer uses, let's say, an Ethereum node implementation, or some other blockchain node, that is known to have a certain bug that affects certain subgraphs, they might want to bail out of queries for that subgraph as soon as possible, even before they've stopped allocating towards the subgraph, et cetera. So there are more discussions to be had about that kind of extension, but it's been brought up, and it could be interesting. Adam, do you want to briefly cover the things that you were working on, or should I? Would you be better equipped to do it?
D
Sure, yeah. So there are three areas of sort of draft GIPs that I can share more broadly as upcoming. Two of them are areas that I think we've talked quite a lot about before, and there is actually some prior art in all the areas. The first is referred to as subgraph composition, which is essentially a pattern for leveraging other subgraphs within your subgraph, so that's, hopefully, reducing duplicate work across subgraphs.
D
It's creating functionality where, at query time, subgraphs can reference entities from other subgraphs. This has been quite a long-standing feature request that unlocks a bunch of potential things, everything from slightly more parallel execution, in a kind of way, but also, as I say, meaning that popular subgraphs that are commonly extended can just be directly leveraged, rather than having to copy the code into your subgraph to make the best use of them.
D
The second is the idea that a subgraph might be triggered to run handlers not just on the basis of events on chain, but also on the basis of updates to entities from the same subgraph or other subgraphs. That creates a bit more of a pipelining chain of processing, which maybe makes execution a bit more atomic, and makes it a bit easier to reason about what you're doing within a given subgraph, and it could also be done across subgraphs in the same way.
D
So again, going from these really isolated, atomic subgraphs to something that could be building on different subgraphs. Those are both quite substantial changes in terms of what might be available to subgraph authors. And then the third thing is actually more of a revisiting of the IPFS functionality that currently exists within graph-node. At the moment, fetching files from IPFS happens within mappings, and there are all sorts of reasons why that's problematic, both from a performance, reliability, and throughput perspective, but also from a determinism perspective.
D
You can't deterministically say, at the moment, whether a given file is available or not, so it makes PoI and indexing determinism impossible with the current architecture. There's a bit of work to separate out fetching those files from IPFS, doing that a bit more asynchronously, and also preparing for, essentially, integration with a data availability chain.
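The determinism problem can be illustrated with a toy model (this is not graph-node code, just a sketch of the argument): when each indexer decides availability from its own local fetch, two indexers can disagree about the same file at the same block, so their handler outputs, and hence their PoIs, diverge; a shared availability record gives every indexer the same answer.

```python
# Toy model: each indexer has its own view of which IPFS files it could fetch.
indexer_a_fetched = {"QmFileA"}   # indexer A's fetch succeeded
indexer_b_fetched = set()         # indexer B's fetch timed out

def handler_result(fetched, cid):
    """In-mapping fetch: the handler's output depends on local fetch success."""
    return "with-file" if cid in fetched else "without-file"

# Same subgraph, same block, different outputs: non-deterministic PoI inputs.
disagree = handler_result(indexer_a_fetched, "QmFileA") != handler_result(indexer_b_fetched, "QmFileA")

# Shared availability record (e.g. attested on a data availability chain):
# every indexer derives the same answer from the same record.
availability_record = {"QmFileA": True}
def deterministic_result(cid):
    return "with-file" if availability_record.get(cid) else "without-file"
```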
D
So that's something which can help us move IPFS away from being non-deterministic, as it currently is, to being something that could be deterministically supported on the network. So, a few quite big proposals, and yeah, looking forward to sharing them in the coming days and weeks. Any questions about those, or if you want to shout offline about those things, I'd be very happy to do so.
A
So, continuing on this trend: I see Alex also joined, and Matt from StreamingFast. Maybe we can get an update and check the progress on how multi-blockchain really is going. We know that, back in February, we announced we'd be adding support for additional blockchains like NEAR, Polkadot, Solana, and Celo, while also exploring others such as Cosmos, Binance Smart Chain, Avalanche, and others. I believe all of these are progressing.
A
I think today we can already check on how things are with NEAR. I believe Matt can already do a quick demo; it would be great to see how things are. So yeah, Alex.
E
Excuse me. Matt had an issue with his house: he's building a house, and a few weeks back his roof fell off and water went down the drain, and he has an issue right now, so he won't be able to join us for the demo. It's not the same thing, it's not the whole house, but he's had enough issues with his house recently. So I won't be able to run the demo for NEAR right now.
E
So maybe I can tell you what the demo would have been, had he been there; I don't have it set up on my machine here. For NEAR, we have a NEAR node instrumented with the indexer framework, piping out data in the Firehose fashion, and then going through trigger mapping and hitting the mappers.
E
So, you know, AssemblyScript functions for the on-block method. That runs sort of end to end, and he could have shown that to you; it stores entities in graph-node using the Firehose, the new Firehose source that lives in graph-node. It would have been much crunchier with what we're working on, like maybe the team there deciding on how we split the work to map more triggers, because right now you have on-block, which is, you know, relatively useful. Once we get that, we'll get more potential use cases covered, and we can discuss that maybe a little later.
E
Right. So we're still figuring out the piece of work each team takes. We've done some work on the Firehose side, bringing in a few of the, you know, main entry points of the data. Now there's the trigger work; we want to find out who's going to be the best person to do that.
E
I think it was a little bit up in the air, undecided, but we would be happy to tackle more things if we can speed things up. NEAR has not moved that much in the past few days, so maybe there's an opportunity for us to take on some load; I know you guys are pretty loaded there. So that's the state, but otherwise, the collaboration...
E
We have a good relationship, and that's for the public record, right? So we're happy digging into those things together, and I think we're going to have a lot of fun also jumping on other chains eventually. We'll have an update on Solana sometime.
C
Yeah, maybe we could speak a little bit to our side of the integration. I know that the NEAR changes for graph-node were merged yesterday. I think there are some fixes in a pull request open today, and we're also working on the graph-ts type definitions for NEAR, so that you have access to certain NEAR data structures, like the block.
C
I think it's called a receipt, which is kind of the equivalent, I think, maybe to an Ethereum call payload, something like that. So that'll be the next kind of trigger. You can tell that I'm not deep in the integration work myself, but we're working on those type definitions, and also on the conversion of the types that we receive in graph-node, along with the triggers, to pass them over to AssemblyScript.
C
There's always a little bit of transformation we need to make to map the Rust data structures to AssemblyScript memory representations. There's also some work to be done in graph-cli, because right now, when you build a subgraph or deploy a subgraph, graph-cli still assumes certain parts are always Ethereum-related, like the data sources, when it validates the manifest.
C
Initially that was just, you know, Ethereum contract data sources; now we're moving on to NEAR data sources. So there's some work to be done there, like validating the manifest, skipping ABIs, for instance, which we don't have on NEAR, so skipping the code generation that we normally do, and also initializing a new subgraph for NEAR with graph init, which currently assumes you're building a subgraph for Ethereum.
D
Yeah, and I think, yes, so the Firehose: an initial sort of Firehose provider, a block stream, was merged in last week, and that was with a view to NEAR as the first initial use case. But I think we're also verifying the gap towards enabling that for Ethereum subgraphs.
D
So we could then, yeah, start to look and compare the existing RPC integration that exists within graph-node with the Firehose, with the blocks that are coming straight from the Firehose. So I know that there's a bit of validation and testing going on there this week as well.
E
Maybe a small note on the Ethereum side. There's been work on the NEAR integration for the Firehose, but, you know, there's been pressure on the Erigon side and the call handlers side, so we're putting in some work to actually ship the Firehose in a working fashion for Ethereum, so that we could eventually alleviate the risks of not having an Erigon setup for call handlers, as OpenEthereum gets deprecated. So there's been work being done on that front too.
E
If you don't have another solution, maybe the Firehose could go and fill that gap, and also, you know, shrink and simplify the setup: instead of an archive node for call handlers, the Firehose can satisfy that, and be a little bit more precise. So we have an opportunity window there, I think, to make a dent into that realm too.
D
Yeah, I think there's a broader interesting thing there, because the Firehose does unlock slightly more precise ordering, which isn't currently possible with the RPC, because you actually get the real order in which things happened on chain. But I think there's a quite careful versioning, migration, and compatibility story that we need to get right as we move forward and start adopting the data from the Firehose for more things.
G
On that topic, I'd love to hear from Alex. Maybe you could add a little bit of color to what you guys are doing around integration testing, to sort of make sure that the new integration is deterministic. And is there anything you guys have done there that you think could be applied to, or pulled back into, what we're doing with Ethereum and other chains?
E
Right. So, also for the Ethereum stack, the same way the data is produced, we have some sort of testing: running a node, outputting its data, and comparing. We have a thing, maybe you have seen it, called Battlefield. Each time, we discover issues, because we will discover issues, right: a data integrity error, or some conditions in which extraction was not correct.
E
So right now we're in the process of reprocessing the full history for the second time, because we had a small issue, and we'll want to do that again. That's why we appreciate having that done in parallel, which puts a big crunch on the GCP instances, because they use, you know, snapshots from the disk, and it's a lot of data. But this allows us to reprocess and refine, and at the same time augment what we call the Battlefield set of contracts.
E
And, you know, when we discover an issue, we find out what the cause of the issue is, and we reproduce it in that small chain. The Battlefield chain, for us, is a chain from scratch that exercises all the features, all the wonkiness, and the state transitions that we would find, to reproduce some of the data bits that were wrong, or something like that.
E
So that's part of the integration testing, to ensure the output of the Firehose is correct. It's sort of separate from the rest, but it can be iterated on its own, and it simplifies things: once the data is in files and we're satisfied, then all the rest can flow out without much hassle. These things can also be compared per implementation; we usually do that for OpenEthereum and Geth, you know, in two separate stages. We're also looking into some other things; we'll see if every implementation needs to be instrumented, but there are a few other chains that you guys wanted to support, that we might need to support. But yeah, once that Battlefield is applied to one chain and the other, the output should be, you know, the same, until we find a bug. So the Ethereum stack is pretty well rounded.
E
We've accumulated that now; with NEAR, we'll want to do the same thing, and that's going to be an iterative process, as we're going to try more contracts and figure out how people use those contracts. Maybe there are some things that we haven't yet figured out. These are complex systems: the data bits changing according to their consensus, being reverted, and whatnot. There's a lot of complexity, and we'll learn that as we go. But we do want to increase the reliability of at least that layer, because I think it can detach it from, you know, larger-scale, end-to-end testing of graph-node. Is that helpful?
E
I don't think Battlefield currently tests against the JSON-RPC output, but at the same time, I know there have been some questions about the instrumentation done in the Firehose. In some places there's just little room for error, because when the contract sets a value, it's the code that sets the value in the storage and in the memory, and that's where the data is output. There's little wiggle room for error; we're actually much closer to what's really happening than the RPC endpoints. We've actually found some issues by fooling around in the past, even on RPC endpoints, because even the RPC doesn't always have a point of comparison, right? The thing is, they do the work, they say, "Okay, we think it's good," but then it's only when we work on these things and try to figure things out and play around that we discover new issues.
E
So no, the Firehose doesn't have a comparison per se, because it's a new way of doing things. We constantly try to check against other sources, and we find bugs here and there. So I would say, in terms of, you know, data quality, it's an ongoing process, and we're still discovering, I'm thinking of Erigon or other implementations, still discovering issues that no one cross-checked in years, because, you know, they have their implementation, but they won't do five implementations either, right, to compare.
E
So it's still a tricky situation, but it's better, because we have two and we can test. But the end-to-end suite that we have there for battle testing, sorry, the Battlefield test suite, does not compare with JSON-RPC, just to be explicit.
D
I was just going to say, I think that is definitely an area where we want to create a slightly more robust, standardized framework, certainly as there are more teams contributing, that anyone can pass things through. And also so that the wider ecosystem has confidence, when new releases come out, that essentially there won't be any unexpected things, even if you're using a client that doesn't happen to be one of the ones we test with.
D
So it's definitely something to think about over the next couple of weeks and months, yeah.
G
I think something we'll probably be talking more about on the core dev side is end-to-end, you know, kind of integration testing: so not just the Firehose part of the data pipeline, but all the way down to the query side of things. Semiotic, who are on the call, developed this really useful query generation tool, where we think we've already spotted a couple of determinism issues on the query execution side of things. And, you know, if our learnings from over the summer taught us anything, it was that when we really opened things up on PoI disputes, we discovered all these PoI determinism issues, but it was a lot harder of a debug cycle, because we were sort of working with indexers in public, trying to spot these things. In an ideal world, we would spot these things ahead of time with our integration testing suite, so we're not doing these sort of debug cycles in the forums, or, you know, over Discord.
H
One of the projects that I'm working on right now is a fully self-contained integration testing environment that will allow core dev members to run a full Graph network on their laptop, or whatever machine they have. Right now at Edge & Node, we have some testing infrastructure that we deployed to our cloud infrastructure, but that's really not easily shareable.
H
It'll deploy an IPFS node; a full indexer infrastructure with indexer agent, graph-node, and indexer service; a Subgraph Studio API server; a gateway; a fisherman; and a subgraph availability oracle. Then it'll set things up by, you know, publishing a subgraph to the network, minting signal on that subgraph, and staking for the indexer, and then you can have a full Graph network testing environment, to test queries, and to test the various components, for developers that are working on them.
A
Well, thanks. I think it's going to be quite useful once we start integrating all of these networks, and we need to test things end to end, for sure.
A
Okay, we mentioned Semiotic, and we actually have Sam on the call as well. And yeah, Chris Webster is saying something; let me check the chat. Yeah, for sure, I agree. Right, so we have Semiotic on the call as well. Since the last time we had this call, we've had some updates on the curators' profit-scalping analysis.
A
I think that in the previous weeks, they've done more analysis, and I think we have some interesting new insights we could share. And maybe, once we're done with this analysis, we can segue into the network protocol upgrades, and have Ariel talk a little bit about these new protocols as well. So, Sam, if you're there, do you want to share the interesting findings we have? I know curation has been a hot topic.
A
I think folks will be interested to know what your major findings are.
I
Yeah. So, the last time we met, we had just finished a preliminary analysis, and basically, we were using a very, let's say, coarse threshold on what was being considered or flagged as profit scalping.
I
So that was basically it. And, you know, subgraph publishers are of course curating a lot, so now what we do is: we still call anybody a profit scalper if they signal within two minutes of the launch of the subgraph, unless they're a subgraph publisher. Publishers don't get labeled as scalpers anymore, unless they both publish, signal, and burn within one week of when they launch their subgraph.
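The labeling rule just described can be sketched as a small function. This is an illustration of the rule as stated on the call, not the actual analysis code; the parameter names and time units are assumptions.

```python
TWO_MINUTES = 120          # seconds
ONE_WEEK = 7 * 24 * 3600   # seconds

def is_profit_scalper(signal_time, publish_time, is_publisher, burn_time=None):
    """Label a curator a profit scalper, per the rule described above.

    Non-publishers: scalper if they signal within two minutes of publication.
    Publishers: scalper only if they publish, signal, AND burn within one
    week of launching their subgraph.
    """
    if not is_publisher:
        return signal_time - publish_time <= TWO_MINUTES
    return (burn_time is not None
            and signal_time - publish_time <= ONE_WEEK
            and burn_time - publish_time <= ONE_WEEK)
```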
I
I have just a few slides, basically with summary plots. But I'll be talking about profit quite a bit, and what I define as profit is, and everything is going to be in GRT; everything is being converted from Graph curation shares back to GRT. So, profit is being defined as the GRT withdrawals, plus the position value. You could also call that the unrealized value of a current position, and that just means how many GRT a curator would get if they were to burn all the shares that they currently have; and then minus the amount of GRT they initially deposited into the subgraph. Okay. So, with the new analysis, we see that publishers, honest publishers, have made an aggregate profit of 800,000 GRT.
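The profit and ROI definitions above can be written down as a minimal sketch (the function names are hypothetical; all amounts are in GRT, as stated):

```python
def profit_grt(withdrawals, position_value, deposits):
    """Profit as defined above: GRT withdrawn from burning shares, plus the
    unrealized value of the current position, minus GRT initially deposited."""
    return withdrawals + position_value - deposits

def roi(withdrawals, position_value, deposits):
    """ROI of 1.0 means breaking even; 0.75 means a 25% loss."""
    return (withdrawals + position_value) / deposits

# A curator who deposited 1000 GRT, withdrew nothing, and whose shares are
# currently worth 750 GRT has a 0.75 ROI, i.e. a 25% loss:
assert profit_grt(0, 750, 1000) == -250
assert roi(0, 750, 1000) == 0.75
```

This matches the 0.75 ROI figure for honest curators quoted a little later in the discussion.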
I
And these are the deposits to get these ROIs. So here is the histogram of the honest curators and the scalper curators. We can see, you know, it's mostly honest curator activity. And here, I thought this was fun: this is an outlier. One of the honest curators has made a killing, a 22x ROI.
I
If we zoom in here, we see a little more information. This is just zooming in to the 0 to 2x ROI range.
I
We see that most of the honest curators have actually lost money; they have about a 0.75 ROI right now, so they've lost about 25 percent. That's what this says right here. On the other hand, we see the scalpers...
I
We see that they, in general, have done a little bit better, so they're either breaking even or slightly better (an ROI of one means breaking even); it's just slightly shifted to the right.
I
What this doesn't capture is the recent curation bootstrapping rewards, and what those rewards have done. All this data is from on-chain data related to curation events; it doesn't capture the airdrops of curation rewards.
I
So the bootstrapping curation rewards, and what they will have done (I'm going to add that information in later), are going to shift all of this blue stuff to the right. And so I expect that, actually, at this point, most of the curators probably have an ROI greater than one, so most bootstrappers, I'm going to expect, are positive on their investment.
I
Let's go back to that, because maybe some other people missed it too. So a curator is being labeled a profit scalper if they signal within two minutes of a subgraph publication...
F
There's an issue with that definition, because the most known bots listen for the first curator to signal on that block, before they front-run that initial transaction and mint on that subgraph. So that initial curation doesn't need to happen within two minutes of the publication.
F
The common pattern right now: they listen for the first curator to signal, and then they just front-run that signal. That can be 30 minutes, that can be a week later, but they just make sure to be the first on the bonding curve. They don't listen to the publication itself.
G
And what would be the objective, on-chain way of identifying someone that front-ran, versus someone who just, you know, happened to be curating at the same time? Is that something, Orion, that you think you'd need to actually watch the mempool for? Do you think there's a way that we could get that from just on-chain data?
I
So, like Brandon mentioned, would that require monitoring the mempool? Would that be an analysis we'd have to do moving forward?
D
We can recover that from the transaction hashes, because they are in the graph subgraph.
I
Okay, yeah. So it may be very interesting to go back and capture that activity as well. That would be pretty conclusive, I think, that someone was doing front running explicitly, and not just rushing to curate on a new subgraph. So I guess I have one last note on this, before I show a quick demo of a tool that's coming out of this analysis: last time we met, Brandon presented on work he's doing for a capital gains tax related to Graph curation. And right now, with our subgraph curation, basically the way the system is, you know, we're talking about this mempool, we're talking about front running, and right now...
I
It is possible to basically deterministically make a profit by doing front running, basically by using Flashbots-type solutions. So getting the capital gains tax changes in place is going to be really important to discourage this deterministic front-running ability, once that's in place.
I
I think, even after the capital gains tax is put in place, it ties up capital for a longer period of time, which is going to dissuade these sorts of extractive behaviors. And I guess one last note is that right now, most of the scalpers aren't making any money; they're using this as basically an ineffective casino (that was Brandon's analogy last time), and they're basically just a menace: they're causing noise for people who want to be honest curators, and they're not being very effective.
A
Yeah, I have a question: do you have the volume of GRT transacted by the scalpers, the GRT that actually moves from one subgraph to another by the scalpers?
I
I do have access to that, yeah. Maybe, Joseph, I'll message you and we can look at other metrics. Just to echo what Brandon just said: these techniques the scalpers are using could be refined, and it could be guaranteed profit right now if their approaches were refined. So that's the key point.
I
That's a key point: these actors need to be removed from the system. Okay. Finally, I have a quick demo. With this demo, basically, what we needed to do for this analysis was measure the position value, or the unrealized value of a position, for all curators over time. I'm going to be showing you an interactive plot that's being done with Python, but at a high level...
I
What I'm going to show you eventually, or what we're working on, is porting it to JavaScript, so that this visualization can be included in community tools, for example, tools being built by the Graph Curation Station.
I
Okay, so here, on the x-axis, we have the Ethereum block; on the y-axis, we have the position value. We're looking at the Omen subgraph, and we're looking at this curator, who is actually also the publisher of the subgraph.
I
And what we see here is that this "+25,000" means that they signaled 25,000 GRT on their subgraph at this point in time. These other gray bars are the position values of other curators that are minting shares in this Omen subgraph deployment. And then, why is the position value of this publisher going up at this point in time? Well, just after they signaled, this other curator signaled, and if you zoom in, you'd be able to see that; that pushed up their value a lot.
I
Actually, it was this person, and this curator, so it increased the value. Actually, there's another one that you can barely see. If I switch here: this person, who is flagged as a scalper, came in right after the subgraph publisher, and it just happens that the math works out so that their position value overlays perfectly. Note that this scalper is still holding; even up to the date of this analysis, they haven't sold.
I
If they had sold, then it would have shown here. So, for example, this curator: they minted at 8,000 and then they sold at a loss of... oh, sorry, they sold later at 8,276. So they made a profit; it's green because they made a profit. If they had made a loss, then they would have a red plot.
I
The JavaScript version is going to be interactive: when you hover over and click on the plot, it's going to show some information about the curator. You'll be able to click on it, and we'll also print out some summary statistics, and it's going to be designed so that it can be integrated with the main curator tools.
I
So if anybody has any suggestions for how to make the plot better, please let me know, and then we can integrate those into what we're building for the community.
A
Amazing, thanks. I think Orion had a blast playing around with this for sure, and we also have the Curation Station guys doing some great analysis as well; they will actually receive a grant from the Foundation to work on a GUI. I'm guessing this tool will be super helpful, so thanks a lot. With this in mind, since we have a couple of minutes, I want to ask Oliver: do you want to do a brief update or provide some context on the protocol updates? Great, let's.
B
Let's start with some things that are further downstream. Publish-and-mint was a feature that we fully implemented in the protocol last week. It's live today: the ability to deploy and signal, or to publish and signal, on a subgraph in one step. It's been a long-awaited feature, so it's quite exciting that we have it now. We have also reduced the curation tax from two and a half percent down to one percent, which is going through council votes.
B
Starting shortly: today we have a council meeting where it's going to be discussed, and shortly after we're going to have a Snapshot council vote on that. In the forum we have the simplified-cut-mechanisms proposal, where over the last few weeks we came across something the community probably wasn't well aware of: the new proposal had a settings mechanism for a cut from zero to 100 percent, which removes an ability that is currently available to indexers, setting negative effective cuts.
B
So we did a forum poll on that to get a gauge of how the community thinks about it, and what we have seen coming back as a result is that two-thirds of those that participated in the poll favor a range of minus 100 to plus 100, so essentially the ability to set negative cuts stays in place.
J
Yes, yeah, these are two changes that are very related to the same thing, or the same story, about making life easier for app developers. You mentioned reducing the curation tax; that's going to make it easier and reduce the friction for them to upgrade.
J
At the same time, we have this mint-and-publish as a bundle; most of the time the app developer is the one signaling for the first time, so that's another improvement. And I proposed a couple of things in the forum related to ownership of the subgraphs.
J
The important thing, or one thing that we identified, and it was discussed in the forums, is that whenever you create and publish a subgraph, you are the complete owner of it, but you can't transfer it effectively. There are some cases where maybe a developer working on the project wants to create the subgraph, because it's easier using their private keys, but they would want to transfer it to a multisig; or maybe a community member created the subgraph
J
for a project, and they want to transfer it, and that's hard with the current implementation. So what I'm proposing here is to manage ownership using an NFT. Whenever someone publishes a subgraph, they get an NFT, and whoever owns the NFT can manage the main actions of a subgraph, that is, upgrades, deprecation, and changing the metadata. This new implementation is done in a PR; it needs audits.
J
Now it's a matter of implementing this idea of having an NFT to represent ownership. Again, an NFT is a good fit because it's transferable and it's a standard, so it's very good for this use case. An additional thing about this implementation is that it has some improvements in the interfaces.
J
We were using a combination of two keys, the graph account and a subgraph number, to represent a subgraph, but in this new implementation we are just using a single ID, like one primary key, to represent a subgraph. This makes things much easier for everything, like when you are going to interact with the contract, but it also makes it easier to represent the subgraph as an NFT, because NFTs have a token ID, which is a single ID, so it makes everything better.
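A toy model of the single-ID, NFT-style ownership described here (class and method names are illustrative, not the actual contract interface):

```python
class SubgraphRegistry:
    """Toy model of NFT-style subgraph ownership: one integer token ID
    per subgraph, transferable like an ERC-721 token."""

    def __init__(self) -> None:
        self.next_id = 1
        self.owner_of: dict[int, str] = {}  # token ID -> owner address

    def publish(self, owner: str) -> int:
        """Publishing mints an ownership token and returns its single ID."""
        token_id = self.next_id
        self.next_id += 1
        self.owner_of[token_id] = owner
        return token_id

    def transfer(self, token_id: int, sender: str, to: str) -> None:
        """Only the current owner may hand the subgraph to, e.g., a multisig."""
        if self.owner_of.get(token_id) != sender:
            raise PermissionError("not the owner")
        self.owner_of[token_id] = to

reg = SubgraphRegistry()
sid = reg.publish("0xdev")           # developer publishes with their own key
reg.transfer(sid, "0xdev", "0xdao")  # then hands ownership to a multisig
print(reg.owner_of[sid])  # → 0xdao
```

The point of the single primary key is visible here: one `token_id` is all that is needed to look up, upgrade, or transfer a subgraph, instead of the old (account, sequence-number) pair.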
J
So I invite you to discuss this proposal. You can see the implementation in the PR, and we are scheduling an audit; we have an upcoming audit this month, in about a week and a half to two weeks. So this is the main idea about transferring ownership.
J
Yeah, the next one is also an improvement that will make things easier for app developers. The way the curation contract works is that whenever you mint for the first time, signal for the first time, there's some initialization process that happens.
J
That means setting up the curve with the reserve ratio, but apart from that, we are deploying an ERC-20 token. Each subgraph has an ERC-20 token that represents a share of the subgraph.
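Shares on a reserve-ratio curve like this are typically minted with the Bancor bonding-curve formula. A sketch under that assumption (the first-mint initialization and parameter names here are illustrative, not the contract's exact behavior):

```python
def shares_minted(deposit: float, reserve: float, supply: float,
                  reserve_ratio: float) -> float:
    """Bancor-style bonding curve: shares minted for `deposit` tokens
    added to a pool holding `reserve` tokens backing `supply` shares.
    `reserve_ratio` is in (0, 1]; 1.0 gives a flat 1:1 curve."""
    if supply == 0:
        # First mint initializes the curve; assume 1 share per token here.
        return deposit
    return supply * ((1 + deposit / reserve) ** reserve_ratio - 1)

# With ratio 0.5, quadrupling the reserve only doubles the share supply,
# so later curators pay a higher price per share than earlier ones.
print(shares_minted(300, 100, 10, 0.5))  # → 10.0
```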
J
We did it that way because it's easier to create composability on top of The Graph; in fact, the GNS is composing with the curation contract by buying shares, represented by tokens, from the curation contract. But this is quite costly, because deploying an ERC-20 token whenever you mint for the first time takes roughly a million gas, which is very similar to how Uniswap works whenever you create a new pool. So there's an improvement that I'm proposing in the PR: instead of deploying all the bytecode of an ERC-20 token each time,
J
we are deploying clones of bytecode that is already deployed on mainnet. This is a technique called minimal proxies, and it's used in many places; in fact, Gnosis Safe multisigs are sort of minimal proxies of the implementation. So we are using the same thing in the curation contract, and it's going to reduce the gas cost quite a lot, like 3x: from some calculations I did, from 1.2 million gas down to 400k.
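The minimal-proxy pattern mentioned here is standardized as EIP-1167: instead of redeploying the full token bytecode, each clone's runtime code is a fixed 45-byte stub that delegatecalls a hard-coded implementation address, which is where the gas savings come from. A sketch of assembling that runtime code (the two hex constants are the standard EIP-1167 bytes; the helper itself is illustrative):

```python
def minimal_proxy_runtime(implementation: str) -> str:
    """Build the EIP-1167 minimal-proxy runtime bytecode that
    delegatecalls `implementation` (a 20-byte hex address)."""
    impl = implementation.lower().removeprefix("0x")
    assert len(impl) == 40, "expected a 20-byte address"
    # 10-byte prelude, 20-byte implementation address,
    # 15-byte delegatecall-and-return epilogue.
    return "0x363d3d373d3d3d363d73" + impl + "5af43d82803e903d91602b57fd5bf3"

code = minimal_proxy_runtime("0x" + "ab" * 20)
print((len(code) - 2) // 2)  # → 45 bytes of runtime code per clone
```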
J
It would be great to make it even lower, but I think with this proposal we are reducing it quite a lot. So again, this is implemented in a PR; it requires audits, but it's there, so you can add any comments. Okay.
B
J
Yeah, there are a couple more, two more. These are more like improvements related to bug fixes, but not critical. This one is sort of an edge case that could happen whenever you delegate a very small amount.
J
Let's
say
one
way
you
could
get
serial
shares
if
the
ratio
of
if
the
ratio
of
of
deposits
and
the
amount
to
to
get
out
from
shares
is
very
like
it's
very
low,
so
this
is
this
is
fixing
that
condition
it's
going
to
revert.
If
you
are
like
depositing
such
a
small
amount
of
tokens
that
you
are
not
going
to
get
any
shares
of
delegation
pool,
this
is
not
happening,
but
it's
something
that
we
identified
and
we
are
like
fixing
this
beer.
Okay.
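The rounding issue here is the standard integer-division hazard in pool-share math, and the fix is the zero-share revert just described. A sketch (names are illustrative, not the actual contract code):

```python
def delegation_shares(tokens_in: int, pool_tokens: int, pool_shares: int) -> int:
    """Shares minted for a deposit into an existing delegation pool.
    Integer division rounds down, so tiny deposits can truncate to zero."""
    shares = tokens_in * pool_shares // pool_tokens
    if shares == 0:
        # The proposed fix: revert instead of accepting tokens for nothing.
        raise ValueError("deposit too small: would mint zero shares")
    return shares

# A normal deposit mints proportional shares...
print(delegation_shares(3, 10, 10))  # → 3
# ...while delegating one wei into a large pool would now revert:
# delegation_shares(1, 10**18, 10**6) raises ValueError.
```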
J
So it's not very critical. The next proposal is to add a fix to unstake. This is, I would say, more important than the last one. It's a condition that we identified by talking with one of the auditors, and it could happen if you want to unstake fully as an indexer, if you want to unstake the full amount.
J
There could be this condition where someone stakes a small amount of tokens to your indexer using stakeTo, and we have a validation that the stake should always be larger than the minimum amount, so by front-running your unstake transaction, they could make your unstake transaction revert by staking a small amount of tokens. So this is sort of front-running the unstake, the unstake-fully, right. So we fixed this in a PR.
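A toy model of the front-run just described (the class, the minimum value, and the check are illustrative of the staking logic, not the contract's exact interface):

```python
MINIMUM_STAKE = 100_000  # illustrative minimum indexer stake, in GRT

class Indexer:
    def __init__(self, staked: int) -> None:
        self.staked = staked

    def unstake(self, amount: int) -> None:
        remaining = self.staked - amount
        # Vulnerable check: any nonzero remainder must meet the minimum,
        # so a tiny third-party stakeTo() makes a full unstake revert.
        if 0 < remaining < MINIMUM_STAKE:
            raise ValueError("remaining stake below minimum")
        self.staked = remaining

indexer = Indexer(staked=100_000)
indexer.staked += 1           # attacker front-runs with stakeTo(indexer, 1)
try:
    indexer.unstake(100_000)  # the indexer's "unstake everything" tx reverts
except ValueError as err:
    print(err)
```

One way to fix this, consistent with the description above, is for the full-unstake path to tolerate such dust, for example by withdrawing the remainder as well instead of reverting; the PR's exact remedy is not spelled out in the call.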
J
It's already audited, and it's ready to be discussed by the council and the community, so the community can do the upgrade and the council can vote on that. Okay, so both of these, the last two I mentioned, are already audited and are ready for an upgrade, if the council decides to.