From YouTube: The Graph - Core Devs Meeting #10
Description
Core Developer Meeting #10 discussing updates within the protocol
The Graph's Media:
Twitter: https://twitter.com/graphprotocol?s=20
Instagram: https://instagram.com/graphprotocol
LinkedIn: https://www.linkedin.com/company/theg...
Website: https://thegraph.com
A
Now, today's agenda is a bit packed, so I'll try to be a good timekeeper, starting with a general update. Next week we'll have our second R&D retreat, with around 50 people attending from seven different teams, which is quite impressive, I'd say. A lot will happen that week, and I'm sure the next core dev call will be very interesting.

A
We are formalizing the concept of working groups: different teams working closely together around a set of common focus areas. During the next core dev call we'll dedicate some time to talk about these working groups, sharing more information around all of their related work streams, so that you know exactly what's going on in the background.

A
We'll also present the teams that you haven't heard from before, such as BlockScience, Prysm Group and, for example, Semiotic. Semiotic can also share more about their current plans and work around cost automation and modeling. Now, focusing mostly on the subgraph API working group, we can start by hearing from the different teams on recent updates, and I think we can start alphabetically, as always. Edge & Node, Adam, are you up for it?
B
I can go; I'll share my screen, if I'm allowed to do that. Perfect. So we run through just the regular format which we do every week, sort of four boxes, starting with risks, problems and help. On graph-node itself we've got quite a lot of pull requests waiting to be either merged or deployed to our services.
B
First off, on the testing ground we've got some POI investigations which are ongoing, where I think we've got a lead, but we do have quite a lot of features across Edge & Node, and also other teams, that are backed up on that. So hopefully we can clear that out as soon as possible.
B
Along those lines, we've got a lot of work-in-progress stuff which we want to get shipped, and then we've got quite a lot of open discussions. The session next week in the R&D meetings will hopefully help us resolve some of those things around the availability chain, which we talked about before, and then some bigger architectural things which we've been discussing.
B
I don't think Janice is on the call, but in the last couple of days we've got a POI query cross-checking MVP demo, which is good. That's something that can be used in testing, or on the network, for monitoring POIs across different indexers running graph-node.
B
We have quite a lot of active things. On GraphQL validations (I didn't quite finish that sentence, apparently) the work was unblocked, but we still need to help some dapps whose queries will break when full validations are applied. Then we've got quite a few things which I think we'll touch on later in the session, so there's quite a lot of pipelining that'll be going on.
B
So
that's
paralyzing,
some
of
the
functionality
within
graph
node,
which
should
speed
up
indexing
in
general.
We've
got
quite
a
few
new
graphman
commands
for
essentially
making
it
easier
to
manage
the
block,
cache
and
some
other
things,
things
which
we've
previously
required
some
database
surgery
and
then
we've
got
some
ongoing
things
which
we've
talked
about
for
a
while
like
file
data
sources
and
then
immutable
entities
is
one
thing
and
it's
a
lot
of
these
things
are
pointing
towards
performance
and
just
improving
the
syncing
speed
of
graph
node.
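The pipelining mentioned here is about overlapping stages, for example fetching the next blocks while the current ones are being processed, so the indexer is never idle on I/O. A toy sketch of that shape (not graph-node's actual code, which is Rust):

```python
import queue
import threading

def pipeline(block_numbers, fetch, process, depth=8):
    """Run fetch and process concurrently: the fetcher stays up to
    `depth` items ahead of the processor via a bounded queue."""
    q = queue.Queue(maxsize=depth)
    done = object()  # sentinel marking the end of the stream

    def fetcher():
        for n in block_numbers:
            q.put(fetch(n))
        q.put(done)

    threading.Thread(target=fetcher, daemon=True).start()
    results = []
    while (item := q.get()) is not done:
        results.append(process(item))
    return results

# Toy stages: "fetching" just returns the number, "processing" doubles it.
out = pipeline(range(5), fetch=lambda n: n, process=lambda b: b * 2)
```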
B
So
that's
thing
which
we
haven't
sort
of
fully
rolled
out:
that's
blocked
by
the
thing
I
mentioned
above,
but
that
will
essentially
make
it
quick
to
write
to
database
but
then
also
to
query
in
certain
cases.
So
yeah
performance
has
been
a
pretty
big
focus.
Next
steps.
A
lot
of
the
folks
can
run
on
the
architecture.
Discussions
make
sure
we
get
the
most
out
of
the
time
together.
We
have
next
week,
so
I
will
stop
there
unless
there's
anyone
else
from
edge
and
node
stuff.
C
I can do a quick update on the indexer component software. Cool. I don't have a screen here, so I'll just do it verbally, but I can give an update on what we've done in the last month and the stuff we have in progress, specifically on the indexer side. We've had a higher velocity the last month; we've had some engineers coming in to help out, from Figment specifically.
C
We have management of off-chain subgraphs via the indexer CLI now, so you can manage your off-chain subgraph list via the CLI without restarting your indexer agent. And we have indexing rules by subgraph ID, so you can define your rules by subgraph ID rather than deployment ID, and then your indexer agent will automatically manage version switching, supporting the current version and the previous version of that subgraph.
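The difference between the two rule targets can be sketched like this: a deployment-ID rule pins one deployment, while a subgraph-ID rule follows version switching and keeps both the current and the previous version synced. The IDs here are invented placeholders:

```python
def deployments_to_index(rule_target, subgraph_id, versions):
    """versions: deployment IDs of one subgraph, oldest first.
    Returns which deployments the agent should keep indexing."""
    if rule_target == subgraph_id:
        # Subgraph-ID rule: support the current and the previous version.
        return versions[-2:]
    # Deployment-ID rule: pin exactly that deployment.
    return [rule_target] if rule_target in versions else []

versions = ["Qm-v1", "Qm-v2", "Qm-v3"]  # hypothetical deployment IDs
```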
C
We actually have some other things too, but I'll stop there, because I don't want to take up the whole meeting. So, really exciting stuff, with increased velocity on the indexer software, and we'll be prepared pretty soon to start working with Semiotic to integrate some machine learning and more advanced tooling to supply strategies for the indexers.
D
Here we go. We usually write the full date, 2022, but Marc on our team found this date to be very special: 2/2/22. So, here we go. To start with the achievements: we opened the Tendermint PRs (graph-node, graph-cli, graph-ts), and we are super excited about it. While doing that, we fixed the block hash mismatch bug on the Tendermint integration.
D
I'm
gonna
jump
to
what's
active
right
now,
so
we
are
troubleshooting
transitioning
that
enderman
integration
we're
doing
like
full
testing
from
the
fire
halls
all
the
way
to
the
graph
node,
and
why
we're
doing
it?
We
ran
into
some
issues
with
fire
hose,
so
our
team
is
gonna
work
on
it
today,
if
not
they're
gonna
reach
to
the
firehouse
experts
here
for
some
help
and
we
updated,
we
are
actually
currently
working
on
updating
our
documentation
for
tetherman
type
generation.
D
Then
up
next
we
are
working
with
simicfest
on
documentation
and
then
on
our
team
created
this
dummy
chain
that
could
be
used
by
all
the
indexers
community
new
hires
to
to
learn
about
the
firehose
stack
test
it
and
have
like
a
dummy
chain
that
is
easy
to
use
for
further
learnings.
F
Okay, I'll go. Cool, so, weekly update for The Guild. In terms of problems and help, I think we need some help with some code reviews; this is becoming, I would say, a minor bottleneck. We have a few PRs ongoing that probably need some discussion and review before moving forward. Achievements: we onboarded two more Guild members.

F
Actually, both of them are new Guild members and just joined the work on The Graph, so we're super excited about it. We have validations; I will keep that and talk about it later, because we have a whole section about it. We have a few more PRs waiting for review. Active work: we started to talk a lot about the new graph client.
F
We
have
some
discussions
for
the
short
term,
what
we
can
introduce
now
with
the
existing
tools
and
what
we
can
do
in
the
longer
term,
like
with
ceramics
and
probably
with
some
store
that
lives
in
the
client
and
later
things.
But
this
is,
I
would
say,
like
a
broader
discussion,
we
started
with
the
relay
compliant
graphql
schema.
F
This
is
we
have
a
pr
just
started
with
the
graphql
part
later
we'll
deal
with
the
actual
sql
and
the
way
that
we
building
cursors,
this
might
be
a
bit
tricky
charlie
from
our
team,
is
working
on
an
article
on
graphql
best
practices
from
the
point
of
view
of
the
consumer.
F
This
will
probably
be
ready
in
a
few
days
still
working
on
the
collaboration
with
synthetics.
We
are
almost
done
with
the
typescript
query
builder.
They
built
it.
We
are
now
just
refactoring
it
and
re-implementing
it
as
part
of
the
coding
plugin.
This
is
almost
done,
and
the
end
and
or
filters
are
also
active,
going
to
be
ready.
I
guess
in
a
few
days
next
steps
working
on
on
some
small
fixes
and
changes
in
graph
node.
One
of
them
is
updating
graphical
to
latest.
F
I
think
it
also
works
worth
considering
to
update
it
in
all
places
where
we
have
graphical
or
we
depend
on
graphical,
because
this
might
be
an
annoying
security
issue
that
was
recently
found
there
yeah
we
plan
to
do
an
introduction
to
the
hosted
service
with
david.
We
planned
it
for
last
week,
but
it
didn't
happen.
F
Let's
try
to
make
it
happen
soon,
because
we
really
want
to
learn
more
yeah
next
step,
talking
about
like
the
sub
components
of
the
graph
client
and
how
we
can
try
to
break
it
to
smaller
pieces
and
start
with
the
actual
implementation
yep.
That's
all.
E
Who's next now? I don't have a list. Is it me now? ("You go, Alex.") Yeah, okay. So I've changed the format a little, you see that here. ("Oh, pretty nice.") Well, I stole that; it's not mine. So, we did some Solana work; a lot of Solana work has been done, and we're continuing a lot of Solana work. A lot of the uncertainty about history has been resolved: I think I said this earlier, but history reprocessing has been de-risked, although there are some other complexities to go through.

E
The history cost is going to go exponential as we go back to genesis, but that's fine. So now we're looking forward to Solana keeping its pace; we'll have it running for several days and weeks, so we can be sure that it's stable performance-wise and that it produces the data we need, with the sort of latency that's needed, without drifting. And we're working with the Solana Foundation to make sure we implement things right.
E
We finished implementing that fast-boot mechanism, which has tremendously interesting properties: if we can reduce the boot time of a node from, right now, 20-25 minutes down to three or four minutes, then that's an hour of drift that we can avoid. So we're working with them, and we're near having a PR that goes into graph-node. It'll be useful for everyone, not only those who use the Firehose.
E
Okay,
so
that's
for
solana
and
then
we've
been
doing
a
lot
of
fire
hose
indexing
stuff.
Remember
all
these
things
to
to
to
allow
us
to
go
through
a
sparse,
sub-graph
first
step
was
that
irreversibility
index,
which
has
been
done.
Steph
has
pushed
that
all
everywhere
we
can
and
we're
building
those
indexes,
so
that
will
speed
up
sort
of
linear
processing
in
one
way
and
then
we're
starting
to
work
on
accounts
and
events
indexes.
We
have
some
like
designs.
E
If you want to chit-chat about it, we have some interesting things to really speed up indexing in a general way: we'll have some bloom-filter sort of thing that spans hundreds or thousands of blocks, so that's cool. The next thing is that we've been doing a lot of smaller things, like fixing up NEAR dynamic data sources, for instance, doing some testing on Ethereum, and a constant stream of things to maintain. But that aside, we're working on the Firehose SDK, which we've been discussing in the past few weeks.
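The bloom-filter idea here is a per-range index with no false negatives: if the filter for a range of blocks says an account never appears, the whole range can be skipped. A toy, stdlib-only version (the parameters are invented):

```python
import hashlib

class RangeBloom:
    """Bloom filter covering one range of blocks: records every account
    touched in the range. Lookups may false-positive, never false-negative."""

    def __init__(self, bits=4096, hashes=3):
        self.bits, self.hashes, self.field = bits, hashes, 0

    def _positions(self, key):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, key):
        for pos in self._positions(key):
            self.field |= 1 << pos

    def might_contain(self, key):
        return all(self.field >> pos & 1 for pos in self._positions(key))

# One filter per, say, 1,000 blocks: a consumer interested in one account
# can skip every range whose filter rules it out.
blooms = RangeBloom()
blooms.add("account-alice")
blooms.add("account-bob")
```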
E
I
think
that
converges
into
a
few
things,
namely
the
work
that
figment
has
done
on
the
dummy
chain,
so
that
someone
can
be
onboarded
quickly,
they
would
run
a
fake
blockchain
that
has
fire
hose
instrumentation.
We
can
show
that
to
implementers
see.
This
is
an
example
of
what
you
want
when
you're
a
fire
hose
and
then
a
dummy
fire
hose
with
some
stupid,
and
you
know
simpler
data
model,
and
then
hopefully
we
would
have
that
down
to
the
graph
node.
E
So
someone
could
start
a
fake
blockchain
and
have
the
whole
pipe
running
on
their
laptop
for
development
purposes,
but
also
for
learning
purposes
and
also
for
developing
new
features
because
oftentimes
we
don't
need
a
full.
You
know
blockchain
running
and
synced,
that's
a
big
burden
when
we
just
want
to
develop.
You
know
a
new
feature
in
the
fire
hose
or
fix
some.
I
don't
know
reliability
issues.
So
that's
the
thing.
D
Just quickly, that update reminded me of something: Figment hired three software engineers, since we're going to be enlarging the scope of work and doing different things. So we recently hired three more engineers at Figment.
E
Okay, the idea is: "this is the exact process; this is the exact expectation; this is the exact piece of code you need to respect; this is the interface; and this is the entry point into the graph-node; and this is what you need to do." We haven't built that yet, and it'll be really useful if you want to give grants to some other foundations that will implement it, or if some people just want to get their chain ready for The Graph without talking to us.

E
There will be some documentation, and I'm discovering, while the Figment folks are onboarding onto the same sort of technology, that it is helpful for them to have pedagogic explanations, a way to get the whole model in their mind. So that's the purpose, and I think that's going to be really useful. Actually, when I saw Dan's work there, I thought: man, I'd like to have that. There are a lot of times where I need to wait 20 minutes because I want to boot a node to have some data in a state I can't reproduce otherwise. Now we'll have a fake blockchain where we can inject some code to fake situations that we would not normally have, so that's cool. Sometimes we're waiting for an event to happen on mainnet, a fork event, for example, and you want to simulate that. Well, we'll have a place to do sort of an end-to-end testing engine that is much more lightweight.
D
Yeah, and if I could put Dan on the spot here: could you explain a little bit more what the dummy chain is? Can you tell us a little bit more about the dummy chain? Can you hear me?
H
Yeah, can you guys hear me now? Yep? All right, cool. So the dummy chain project was kind of a scratch-your-own-itch situation; it wasn't started as "hey, let's build this thing." It came primarily out of issues in figuring out how the StreamingFast Firehose stack works, because some of the older projects essentially have different components, and they have some history going back.
H
So
there's
a
lot
of
different
croft,
I
would
say
baked
into
it
so
and
when
you're,
starting
from
scratch,
you
kind
of
want
to
see
a
clearer
picture.
You
want
to
see
like
what
does
this
thing
do
what
about
this
one?
What
about
this?
One?
Why
they
all
in
the
same
place
like
how
they
talk
to
each
other
like
what
needs?
What
so,
during
initial
like
work,
I
found
it
was
kind
of
hard
to
to
work
in
a
lot
of
different
projects.
H
All it does is start from a height you want: you say you want to start from block 1,000 or 1 million, it doesn't matter, and it will progress forever until you stop it. You can specify which block rate you want this node to progress at, and it does have local state, meaning it has local block storage, which is basically just JSON files in a directory.
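That behavior, start at an arbitrary height, advance block by block, persist each block as a JSON file plus chain metadata, is easy to picture with a sketch. The field names below are invented, not Figment's actual format, and a real version would also throttle to the configured block rate:

```python
import json
import pathlib
import tempfile

def produce_blocks(start_height, count, out_dir):
    """Write `count` deterministic dummy blocks as JSON files, chained by
    parent reference, and record where the chain left off."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    parent = "genesis"
    for height in range(start_height, start_height + count):
        block = {"height": height, "parent": parent, "transactions": []}
        (out / f"{height}.json").write_text(json.dumps(block))
        parent = f"block-{height}"
    # Metadata lets a restarted (or copied-over) node continue from here.
    (out / "HEAD").write_text(str(start_height + count - 1))

chain_dir = tempfile.mkdtemp()
produce_blocks(1_000_000, 3, chain_dir)
```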
H
I
mean
nothing
fancy
but-
and
they
also
mean
like-
has
a
metadata
information
like
where
it's
at
so,
if
you
like,
have
the
data
ported
over
to
different
machine
or
something
you
can
just
continue
from
there,
but
the
important
part
is
that
it
provides
just
a
few
basic
types.
So
the
types
are
like
blocks,
transactions,
events,
attributes
and
things
like
that,
so
they
all
baked
into
the
block
itself.
There
is
no
concept
of
validators.
There
is
no
concept
of
like
performance
or
anything.
It's
just
essential.
It's
like
hey.
H
You
know,
I
won't
just
have
basic
information
available
to
me
for
experiments
and
it
provides
already
protobufs
baked
into
so
meaning.
Like
you,
don't
have
a
you.
Don't
have
to
create
a
separate
repo
for
maintaining
those
protobufs
since,
like
each
of
them
will
be
needed
for
each
graph.
Note:
integration,
which
is
nice.
I
mean
it's
kind
of
you
have
everything
in
one
place
it
the
important
part.
Is
it
already
has
the
deepmind
or
instrumentation
baked
in
so
there
is
no
need
to
like
make
any
changes,
and
it's
fairly
straightforward.
H
Exactly
yeah
yeah
I
mean
it
has
like
some
concepts
of
like
node
storage
and
this,
but
essentially,
if
anyone
who
wants
to
create
some
more
complex
scenario
like
alex
mentioned
with
like
forks
and
such
you
can
definitely
do
it,
you
don't
need
like
to
mess
with
actual
chain.
It
doesn't
have
like
huge
compile
times.
It's
you
know
basically
go
build
and
you
have
it
so.
The
the
work
on
this
front
is
ongoing.
H
So
I've
made
a
couple
improvements
last
week,
so
the
the
chain,
the
dummy
chain
itself,
is
just
the
one
piece
and
the
second
part
was
the
starter
project.
H
So
the
starter
project,
it's
essentially
the
sf
dash
whatever
so
it's
kind
of
like
the
same
setup
setup,
the
command
line
tool
to
to
you
know,
index
data,
and
we
wanted
to
create
some
kind
of
a
template
where
it
could
be
used
like
for
experimentation,
new
features
or
it
could
be
used
as
a
starting
point
for
a
new
project
like
if
you're
building
for
a
host
tech
for
any
new
chain.
So
we
do
have
some
repos
and
we're
planning
on
like
sharing
those.
H
I
don't
know
like
the
timeline
for
this,
but
essentially
all
this
information
is
going
to
be
available
and
anybody
can
run
it
and
the
next
steps
obviously
is
like
integrating
into
ref
notes.
So
they
have.
You
have
essentially
end-to-end
tests
so
to
speak,
so
yeah,
that's
pretty
much
it
for
for
the
diamond
chain.
A
Guys
thanks
don
thanks
for
sharing,
so
we
had
two,
we
had
other
items
on
the
agenda
and
yeah.
We
did
change
the
order
a
little
bit.
Do
you
want
to
cover
the
new
graphql
api
features
you've
mentioned
before
you
want
to
go
deep
into
that,
and
then
you
can
have
line
change
demo.
F
Okay, so yeah: GraphQL validations have been merged. In terms of impact, I think we already saw a few dapps that were impacted by this change, mainly because some queries were written before and had some minor semantic issues, like unused variables and unused fragments, which in most cases don't really affect anything related to execution; they're just semantic errors, but they are also part of the GraphQL validation spec.
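An unused variable is a good example of the kind of error these validations now reject. The check below is a rough, regex-based sketch of that single rule, not the real spec-compliant validator used in graph-node:

```python
import re

def unused_variables(operation: str):
    """Variables declared in the operation header but never referenced
    in the body -- one of the GraphQL validation rules discussed above."""
    header, _, body = operation.partition("{")
    declared = set(re.findall(r"\$(\w+)\s*:", header))  # "$var: Type" declarations
    used = set(re.findall(r"\$(\w+)", body))            # any "$var" reference
    return sorted(declared - used)

# $unused was previously tolerated; with validations on, it is an error.
query = """
query Tokens($first: Int, $unused: String) {
  tokens(first: $first) { id }
}
"""
```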
F
So
we
saw
a
few
like
rejects
and
issues
with
that
already,
so
rollout
will
happen.
I
guess
soon,
I
don't
know
adam
or
david,
maybe
can
help.
With
the
actual
plan
of
rolling
out
to
the
hosted
service,
we
managed
to
get
almost
100
coverage
of
the
validation
spec.
We
only
have
a
few
technical
limitations
with
the
the
graphql
parsing
library
that
we're
using
in
graph
node.
F
Those
are
not
super
critical.
It
is
not
at
this
stage
because
they
are
not.
These
are
rules
that
don't
really
affect
the
execution
of
graph
node
in
terms
of
migration,
so
charlie
from
our
team
started
with
writing
a
whole
migration
guide
like
what
are
the
most.
We
took
like
the
the
list
of
queries
that
were
affected
and
the
way
that
they
affect
users
like
the
one
that
were
mainly
failed.
So
we
tried
to
cover,
like
all
of
these
possible
errors
and
ways
to
address
it.
F
So
I
guess
the
the
most
annoying.
Let's
say
a
validation
error
is
the
overlapping
fields.
This
is
going
to
affect
some,
let's
say
execution
and
determinism.
So
it's
a
rule
that
I
guess
can
really
help
to
simplify
some
aspects
of
execution
in
graph
node.
F
Since
we
don't
really
need
to
do
some
assumptions,
we
can
just
start
with
the
execution
after
everything
has
been
validated
before
so
we
covered
most
of
the
the
issues.
I
I
don't
think
we
have
time
to
go
over,
like
all
the
affecting
the
rules.
F
Sahaja from our team created a small CLI tool that you can just run, providing your subgraph URL and the list of files that you're using, that is, the actual operations. What this does, basically, is scan the entire code base, find GraphQL queries, and then test these queries against the GraphQL schema coming from The Graph, and the output is just the same as you'll get in production.
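The scanning step of such a tool can be sketched like this: walk the source, pull out GraphQL operations, then hand each one to a validator. The gql-template-tag convention is an assumption for the example; the real CLI may locate queries differently:

```python
import re

GQL_TAG = re.compile(r"gql`([^`]*)`", re.S)

def extract_queries(source: str):
    """Collect every gql-tagged template literal from a JS/TS file.
    Each result would then be validated against the subgraph's schema."""
    return [match.strip() for match in GQL_TAG.findall(source)]

sample = '''
import gql from "graphql-tag";
const TOKENS = gql`query { tokens { id } }`;
const PAIRS = gql`query { pairs { id } }`;
'''
```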
F
Adam, David, do you know if there's a specific date or something for rolling this out?
B
Yep,
so
I
can.
I
can
talk
to
that,
so
the
approach,
because
we
have
a
bunch
of
other
functionality,
sort
of
depending
on
the
stuff,
like
I
think
in
the
immediate
term,
david
made
some
changes
which
will
sort
of
resolve
most
of
these
problems
sort
of
on
on
the
graph
node
side.
B
So
so
we're
able
to
essentially
roll
out
the
refactor
which
which
is
associated
with
without
without
breaking
users,
queries
and
then
yeah,
we're
working
with
other
folks
from
the
team
in
terms
of
reaching
out
to
apps
and
I'm
contacting
them
and
and
and
making
these
changes.
So
essentially,
validations
will
be
there
except
most
of
the
breaking
things,
will
kind
of
be
handled
sort
of
seamlessly.
But
then
we
do
want
to
move
to
a
point
where
we
do
require
proper
sort
of
well-formed
graphql.
B
F
Yeah, so anything that we can do to help make it easy, even working with actual dapps to see how we can scan their code bases and find potential issues, we would love to help, and also to improve this tool, because I guess it could really help others. So I guess that's all on the validations; if someone wants to discuss it, feel free, before I jump to the next one.
F
We did a lot of work on filtering, to make it easier to filter the data. So I guess with the next version of graph-node, or maybe the version after (we'll see how reviews go), the plan is to merge a few PRs. We already implemented filtering by change block, so now you can query for specific entities changed since a specific block.
F
That
means
that
instead
of
querying
all
the
data,
so
today
we
have
filtering,
but
if
you're
filtering
you're
just
filtering
on
the
actual
level
that
you're
occurring
so,
for
example,
if
you
ask
for
a
root
type,
you
can
only
filter
its
fields
and
not
filter
the
root
type
based
on
the
child
types.
So
with
this
feature,
you
can
filter
the
actual
root
list
of
data
based
on
the
child
entities
that
are
linked.
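Concretely, the two filters being described might look like this (the entity and field names are invented for illustration; "_change_block" is the changed-since-a-block filter, and the underscore-suffixed field filters a parent by its linked child entity):

```graphql
{
  # Only entities whose data changed at or after the given block.
  tokens(where: { _change_block: { number_gte: 14000000 } }) {
    id
  }
  # Filter the root list by a field on the linked child entity.
  pools(where: { token_: { symbol: "ETH" } }) {
    id
  }
}
```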
F
This
should
simplify
some
aspects
of
querying
the
data
and
also
like
sorting
if
you're,
building
a
subgraph
that
has
more
than
one
entity
or
complex
entity
structure.
You
should
simplify
some
aspects
of
filtering
and
then
our
filtering
might
land
on
next
version
depends
on
like
performance
benchmarks
that
we'll
do
to
see
how
it's
going
to
be
to
behave,
but
some
aspects
of
ender
and
and
or
filters
is
going
to
land
as
well,
and
a
few
more
bug
fixes.
One
of
them
is
to
introduce
case
insensitive
search.
F
This
is
also
one
of
them
so
yeah.
These
are
all
the
new
filters
and
there's
also
going
to
be
a
change
or,
oh
sorry,
gone.
B
Yeah,
I
was
just
gonna
say
a
lot
of
these.
These
graphql
changes
have
been
quite
long,
requested
sort
of
changes
or
features
like
on
the
graphical
side,
but
we
haven't
been
able
to
get
to
them.
So
super
excited
to
see
a
bunch
of
these
some
quite
some
quite
old
issues
getting
resolved,
which
is
nice.
F
Yeah
yeah
and
if
we're
talking
about
like
the
actual
improvements
for
the
schema
itself,
we're
also
working
on
improving
pagination.
So
we
started
with
the
discussion
on
pagination
and
how
this
could
be
simpler,
and
then
we
saw
that
today
we
don't
really
implement
connections,
which
is
the
recommended
way
and
the
best
practice
for
implementing
pagination
and
also
like
filtering
for
entities.
We
think
that
there
is
a
way
to
introduce
that,
along
with
the
existing
api,
without
any
breaking
change
to
the
graphql
schema.
F
So
this
should
just
allow
users
to
query
data
and
get
cursors
and
filter
the
data
based
on
the
actual
page
and
cursor
that
they're
using
this
could
also
help
with
subscriptions.
F
I'm
not
sure
like
how
subscriptions
are
how
broad
the
usage
of
subscriptions
today
in
graph
node,
but
if
you're
doing
subscriptions,
usually
you
want
to
open
the
subscriptions
since
a
specific
change
or
specific
cursor.
So,
for
example,
you
want
to
get
you
want
to
subscribe
to
all
the
new
entities
since
a
specific,
cursor
or
entity
that
you
had
before.
So
this
would
be
possible
with
cursors
and
the
connection
based
pagination.
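Connection-style pagination hinges on opaque cursors the client can hand back to resume. A minimal sketch of the idea, with an invented entity shape; real cursors would encode enough of the sort key to page efficiently:

```python
import base64
import json

def encode_cursor(entity_id):
    # Opaque to clients: base64-encoded JSON naming the last entity seen.
    return base64.urlsafe_b64encode(json.dumps({"id": entity_id}).encode()).decode()

def page_after(entities, cursor, first):
    """Return the `first` entities after `cursor`, plus the cursor to resume."""
    start = 0
    if cursor is not None:
        last_id = json.loads(base64.urlsafe_b64decode(cursor))["id"]
        start = next(i + 1 for i, e in enumerate(entities) if e["id"] == last_id)
    page = entities[start:start + first]
    next_cursor = encode_cursor(page[-1]["id"]) if page else None
    return page, next_cursor

entities = [{"id": f"entity-{i}"} for i in range(5)]
page1, cursor = page_after(entities, None, 2)
page2, _ = page_after(entities, cursor, 2)
```

The same cursor could seed a subscription ("give me everything after this point"), which is the subscription use case mentioned above.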
F
This is something we're working on. I guess it will take a bit longer to achieve, especially because of nested connections: say you ask for a root type and then you ask for nested fields; this might make it a bit more complex. But it is something that we can definitely achieve, and I hope it will make pagination a lot simpler, and also give a more robust way of identifying the cursor that you're currently looking at.

F
Yeah, so all of these changes are really cool and nice, but the big thing that I guess is coming is API versioning, where the goal is to be able to version every GraphQL change, either in the schema or in the resolvers, and to allow graph-node to support multiple versions at the same time, so users can either specify the version that they want or just use the latest.
A
All right, so continuing on tooling: LimeChain has also created a clever tool to debug failed subgraphs at a specific block number without having to sync up to that particular block number, which I think is pretty cool. I think we can link the recent PR to graph-node, but we have Victor from LimeChain on the call, so maybe we can see it in action already. Do you want to share your screen and do a quick end-to-end demo of the tool?
A
Or
you
can't
talk,
let
me
see
if
I
can
on
mute
too.
G
Yeah, okay, great. Let me share my screen. Great.
G
Okay, I suppose you can see it? Yes? Yeah. So what I have here is a pretty simple subgraph, which is basically a fork of the Gravatar example subgraph from The Graph's GitHub repository. I have two handler methods: one for handling NewGravatar events, which simply stores the Gravatars, and the other handler, which is for handling the UpdatedGravatar event and basically updates an already existing Gravatar, one I have stored in my store. And what I do here in this if-statement is say that if I do not find the Gravatar that I need to update, then basically I should not be able to reach this point, so I just log "critical: unexpected, Gravatar not found" and return from the function. This looks basically fine, since I expect to never enter this if-statement, and if I do enter it, I've said it's unexpected behavior.
G
However,
when
I
deploy
this,
I
get
the
familiar
failed
subgraph.
Now
there
are
two
ways
to
go
from
here.
One
way
is
to
make
some
changes
and
redeploy
to
the
hosted
service
and
wait
all
the
way
to
block
six
million
and
blah
blah
blah
to
resync.
However,
I'm
not
gonna
do
that,
because
I
have
limited
time.
G
What
I
have
here
is
that
I
know
already
that
I've
reached
block
six
million
and
blah
blah
blah
blah,
so
I'm
gonna
say
I
want
to
start
from
this
block,
and
I
do
this
in
the
familiar
way
by
having
start
block
six
million
and
basically
the
block
on
which
I
failed
now
I
will
also
run
a
local
graph
notes
and
I
run
it
in
the
familiar
way.
G
However,
I
also
give
an
additional
argument,
which
is
a
fork
base
which
is
url,
and
what
this
says
to
my
local
graph
node
is
okay,
I'm
going
to
fork
a
subgraph
from
the
hosted
service
and
by
fork
I
mean
I'm
going
to
use
the
up-to-date
store.
I
already
have
on
the
hosted
service
and
I'm
going
to
fetch
this
fork
by
id.
So
that's
why
I
have
subgraphs
slash
id
and
yeah.
I've
run
this
note
already
now.
G
Now, what I can do here is deploy the subgraph with the usual deploy command. However, apart from the IPFS and node URLs, I also have to give my debug fork, which in this case is simply the ID of the subgraph, so it would be this one. I'm going to copy-paste that and run it. Oh, I had to create the subgraph first; let me create it.
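The demo combines two flags: graph-node's fork-base option and the deploy command's debug-fork option. Roughly, the invocations look like the following; the URLs, connection strings and deployment ID are placeholders, so check the graph-node and graph-cli docs for the exact flags in your versions:

```sh
# Local graph-node that can fork subgraph state from the hosted service.
# fork-base is the base URL; the deployment ID gets appended to it.
graph-node \
  --postgres-url postgresql://user:pass@localhost:5432/graph \
  --ethereum-rpc mainnet:http://localhost:8545 \
  --ipfs 127.0.0.1:5001 \
  --fork-base https://api.thegraph.com/subgraphs/id/

# Redeploy locally, reusing the hosted subgraph's store via the debug fork.
graph deploy example/gravatar \
  --ipfs http://localhost:5001 \
  --node http://localhost:8020 \
  --debug-fork <deployment-id>
```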
G
And this will run... oops.
G
Now what I can do is inspect my code here, and I can see that I have made this silly mistake with the ID: I converted it here with toI32; however, I saved it as hex. So I need to fix that to toHex, and now I can redeploy again, using the debug fork to basically mitigate the cost of waiting to sync up to the block.
G
Yeah, as the fork base you can give it basically pretty much any URL that points to some subgraph GraphQL endpoint that fetches entities from the subgraph store. Actually, this is taking a bit; I think I'm having some trouble with my Ethereum RPC.
G
Okay,
but
I
can
answer
a
question
in
the
meantime.
This
is
a
bit
unexpected
rpc
failure
here,
but
if
you
have
any
questions
I
can
answer
them.
B
When we had first talked about it, the use case was quite narrow, in that it was basically just about trying to debug this one block: that classic situation where your subgraph hits a block and then has a failure for some reason you hadn't anticipated, and we wanted to try to improve and speed up the debug process to get out of that.
B
But actually, I don't know if people have used Hardhat forking, or the sort of fork behavior that Ethereum dev tools have, which lets you fork mainnet to your local blockchain. This essentially does the same thing for a local subgraph, which is just really powerful, and it does support that one-block debugging situation, but it also lets you do things like fork mainnet and do some local contract development.
B
You could also fork a mainnet subgraph at the same block and have that pointed at your forked local blockchain as well, so it actually just potentially really improves the developer experience for iterating on subgraphs, rather than always having to replay through history, or not being able to do it at all in certain scenarios.
G
Yeah, it's essentially a pretty general topic, in a sense. If you have any remote subgraph somewhere that has a synced-up state, even if it has not failed, you can always fork from it and build on top of its state. So yeah, as you said, it expands beyond just the debugging use case we have now.
G
Yeah, there will be a short video tutorial for it. This PR is not yet merged, but it's going to be pretty soon, and there are also supplemental docs for the whole process, which will be uploaded to The Graph docs as well.
A
Okay,
thank
you
all
right,
thanks
cool,
so
we
have
10
minutes.
We
still
want
to
cover
the
gip002,
but
in
the
meantime
this
might
be
a
good
segue
and
I'm
looking
at
adam
and
matt.
With
the
recent
discussions
we've
been
having
around
good
graph,
node
dev
and
testing
experience,
not
sure
if
that's
something
you
want
to
cover
right
now.
Do
you
want
to
speak
to
that
adam
mark?
Where
else
has
been
discussing
this.
B
So
I
can,
I
can
talk
to
sort
of
the
context
of
it
briefly,
which
is
that
I
guess
we've
gone
over
the
last
sort
of
six
months
from
having
sort
of
one
team
working
on
sort
of
graph
node
a
lot
to
having
a
a
lot
of
teams.
As
the
number
of
core
devs
has
increased
and
what's
been
interesting,
there
has
been
the
different
teams
coming
in
and
their
different.
I
guess
development
workflows
for
sort
of
iterating
and
testing
features
as
they
were
at
like
adding
them
to
graph
node.
B
So
I
think
it's
we've
had
a
couple
of
sessions
where
we've
talked
about
it,
essentially
just
because
we
don't
want
to
sort
of
suddenly
have
be
trying
to
support
three
different
types
of
workflows
if
we
can,
but
I
think
matt
can
maybe
talk
to
their
approach,
which
is
to
sort
of
have
a
battle
battleground
type
setup,
which
was
in
a
preview
like
previously
in
the
separate
repo
was
starting
to
bring
some
of
that
into
graph
note
itself.
B
I think Dotan was also looking to bring in more of a local blockchain type environment to support those kinds of things. So yeah, I don't know if you guys want to talk about your experience and how you're bringing some of that into the core codebase.
I
Yeah, I can start briefly. So, the main pain point:
I
When we joined, we were one of the first new core dev teams joining The Graph protocol, and we faced the problem, like Adam said, that we had issues actually running graph-node. Not running it as such, but more having a real subgraph with which we could test different parts of graph-node. At that time we created what we call the graph-node-dev repository, which contains scripts to make it easier to actually just deploy a subgraph and everything. And Dotan from The Guild actually opened a PR to add to it, and at that point we started discussing what we could do to improve the dev experience. From there, I think one of the main conclusions from the initial discussion was that we want to bring more into graph-node directly so that it benefits everyone. That's the main goal: just make it easier for people to develop on graph-node, with everything already ready for them to start, rather than spending two hours just finding a good subgraph, finding an Ethereum node, and everything.
F
So we started a local project that is very similar to what you did, and then we tried to just put it as part of the Docker image, like the docker-compose configuration for dev time. And I think the work that you did is amazing and can be reused if we just find a way to make it, let's say, simpler or stage-based, based on what your actual needs are.
B
Yeah, and then one thing which also, and maybe Stefan can talk to this if he's on the call. Oh, maybe he's dropped now. I know that in terms of actually starting to have some more test runs configured in graph-node itself, these essentially rerun a given subgraph over a given block range.
B
So then we can actually see if the PoI is changing, and then also some quite initial performance things as well. It's cool to see those running automatically in GitHub. I think Stefan's not able to unmute himself.
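The core of the PoI check described here can be sketched in a few lines (illustrative only; the real checks live in graph-node's test tooling): given the proofs of indexing recorded by two runs of the same subgraph over the same block range, report the first block where they diverge.

```python
# Minimal sketch of a PoI regression check: compare the proofs of
# indexing from a baseline run against a candidate run over the same
# block range, and return the first block where the hashes differ.

def first_poi_divergence(baseline, candidate):
    """baseline/candidate: dicts mapping block number -> PoI hash."""
    for block in sorted(baseline):
        if candidate.get(block) != baseline[block]:
            return block
    return None  # the runs agree on every recorded block

run_a = {100: "0xabc", 101: "0xdef", 102: "0x123"}
run_b = {100: "0xabc", 101: "0xdef", 102: "0x999"}
print(first_poi_divergence(run_a, run_b))  # -> 102
print(first_poi_divergence(run_a, run_a))  # -> None
```

A check like this is cheap enough to run in CI on every change, which is what makes wiring it into GitHub Actions attractive.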
A
J
Okay, there you are. This is why I'm really proud of what we did, thanks guys. This tooling is really getting in place, and with the thing we just saw with the fork and all of this, I think this can converge as well with the flow improvements of—
J
Sorry, the fake, the dummy chain and all of this. We're really building a tool set that is needed for automation and faster iteration for all the teams. And getting the feedback from the Figment team on the framework, and the requirements for the framework, helped us a lot to rethink, you know. We knew this part was hard, but now we see how it's seen from the outside.
J
So yes, specifically for the test-run thing: it can now be run in GitHub Actions, and it will be the baseline for some performance testing. So, just running on small chunks, comparing what we can do with the Firehose, with the indexing; on anything that we do, we will be able to test performance on real data. We're just starting to feel that these tools are now part of our tool set. So yeah, that's a general impression of it, but the PRs—
J
A
Yeah, let's, I mean, we did start late, but let's at least try to cover GIP-0023. I know we have Ariel here, just to break it down a little bit.
K
Okay, so this is based on a feature that the community has been discussing for some weeks, a month: representing the subgraph created in the network with an NFT that the owner can keep and transfer to different addresses. This is very useful if you are, for example, creating a subgraph and then transferring it to a multisig, let's say. The update is the following:
K
I wrote an original proposal, GIP-0018, and the way it worked was turning the GNS into the NFT contract via an upgrade. We would have two contracts: the GNS NFT contract and the descriptor that renders the NFT. But we identified some issues when doing that, working along with the team. In particular, OpenSea, Etherscan, and other scanners were having issues identifying that the GNS was upgraded and is now an ERC-721.
K
So I created a new proposal, GIP-0023. The idea is a refactor of the original implementation, where we have the NFT contract being something different from the GNS. So it's more like using composability.
K
The GNS now uses an ERC-721 contract that in turn uses the descriptor to render. And this is a sort of description of the process, right: this is something already implemented. It's been audited, it's deployed on mainnet but not upgraded, and we've been testing the implementation on testnet. Okay, it was shared in the forum, the GIP has been discussed there, and now it's in the process of being presented and discussed through governance, the council, and all that process.
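The composability design described here, with the GNS delegating ownership to a separate ERC-721 contract that in turn delegates rendering to a descriptor, can be modeled roughly as follows (a toy sketch; the class and method names are illustrative, not the actual contract interfaces):

```python
# Toy model of the GIP-0023 structure: GNS -> NFT contract -> descriptor.
# Ownership lives in the NFT contract, not in the GNS itself, so wallets
# and scanners see a plain ERC-721 rather than an upgraded GNS.

class Descriptor:
    """Stands in for the contract that renders NFT metadata."""
    def token_uri(self, token_id):
        return f"ipfs://metadata/{token_id}"   # hypothetical URI scheme

class SubgraphNFT:
    """Stands in for the separate ERC-721 ownership contract."""
    def __init__(self, descriptor):
        self.descriptor = descriptor
        self.owners = {}

    def mint(self, owner, token_id):
        self.owners[token_id] = owner

    def transfer(self, frm, to, token_id):
        assert self.owners[token_id] == frm, "not the owner"
        self.owners[token_id] = to             # e.g. EOA -> multisig

    def token_uri(self, token_id):
        return self.descriptor.token_uri(token_id)  # delegate rendering

class GNS:
    """Stands in for the GNS, which delegates ownership to the NFT."""
    def __init__(self, nft):
        self.nft = nft
        self.next_id = 0

    def publish_subgraph(self, owner):
        token_id = self.next_id
        self.next_id += 1
        self.nft.mint(owner, token_id)
        return token_id

    def owner_of(self, token_id):
        return self.nft.owners[token_id]

gns = GNS(SubgraphNFT(Descriptor()))
sid = gns.publish_subgraph("0xAlice")
gns.nft.transfer("0xAlice", "0xMultisig", sid)
print(gns.owner_of(sid))  # -> 0xMultisig
```

Keeping the ERC-721 as its own contract is what lets OpenSea, Etherscan, and other scanners pick it up without having to understand the GNS upgrade.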
K
Once the council decides to vote on it positively, or maybe asks new questions, etc., eventually it will get upgraded. So we are in that stage of the process: giving feedback to the council and getting the council to decide on the change. Okay.
A
Yeah, and I think we're right on time, so let's wrap it up here, and we'll reconnect in one month from now. Again, we'll have the retreat next week, so hopefully we'll have more juicy updates and interesting stuff to cover here in a month. So stay tuned, guys. Thanks, thanks! Thanks for joining, and I'll see you around.