From YouTube: The Graph - Core Devs Meeting #9
Description
Core Developer Meeting #9 discussing updates within the protocol
0:00 Intro
1:28 E&N Updates
9:00 Figment Updates
11:35 StreamingFast Updates
15:00 The Guild Updates
16:58 LimeChain Updates
The Graph's Media:
Twitter: https://twitter.com/graphprotocol?s=20
Instagram: https://instagram.com/graphprotocol
LinkedIn: https://www.linkedin.com/company/theg...
Website: https://thegraph.com
A
Okay, yeah, thanks for joining. This is our ninth meeting, which is a nice number as we're closing the year. As always, a quick reminder: all of these calls are recorded, so you can always find them on our YouTube channel.
A
Before we hear from the core devs on the latest updates, I just wanted to share that we're in the process of revamping not only these calls, but also how we share all of these core R&D updates with all of you. Moving forward you'll have access to recordings of the ad hoc meetings the different teams are having, as well as what the canonical roadmap looks like and the different formalized working groups we have here.
A
So we will be organizing all of these using the Foundation's Notion workspace, so it should all be fairly accessible to all of you. This monthly call specifically will then be used as a way to share and discuss updates, as we'll do today. Those will be coming from the different working groups as major milestones are hit, so do stay on the lookout for future updates; we'll post everything in the forum. Cool.
A
I think, can we start with Edge & Node, Jannis? I think that's the first team we have.
B
Yeah, of course. I'll share my screen just to walk you through the notes I have here. So, diving right in: we recently shipped a new Rust gateway that replaces the old one that was written in TypeScript.
B
It has much better throughput, so it should be able to carry at the very least hosted-service amounts of volume. You've all seen the charts on Twitter about the traffic we're seeing there, and this should be able to handle that easily.
B
There have also been a number of indexer selection reliability improvements, primarily to improve the client or consumer experience when, for instance, there are faulty indexers: we fall back to others in better ways, and we also score indexers more dynamically, so we react more quickly to changes in the environment. There have also been new graph-node releases and new indexer releases.
B
Both of those are really super important, and if there are any indexers on the call today, I urge you to update your indexer service at least, and graph-node, and I think the indexer agent the same way. The biggest change that I'm personally excited about and interested in is GIP 20.
B
This, unattestable indexer responses, has been released as part of these versions, and that's why it's so important to update. It's a way that we can, in the gateway, detect when certain indexers have issues that are due to, for instance, a corrupt database or something else. If we can detect this, the indexer doesn't get used for that particular query result and isn't on the hook for attestations for problematic results that they themselves probably don't have confidence in, without knowing that something's up. It also means that we can detect these from the consumer perspective and fall back to other indexers who, hopefully, don't have the same corrupt database.
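The fallback behavior described here can be sketched roughly as follows. This is illustrative Python only, not the actual gateway (which is written in Rust); the error names and indexer IDs are made up for the example.

```python
# Sketch: gateway-side fallback on unattestable indexer responses.
# Illustrative only -- the real gateway's error taxonomy is richer.

ATTESTABLE_ERRORS = {"query_syntax", "schema_mismatch"}    # the indexer is fine
UNATTESTABLE_ERRORS = {"store_error", "corrupt_database"}  # the indexer is broken

def query_with_fallback(indexers, send_query):
    """Try indexers in selection order; skip attestation and fall back
    when a response signals an unattestable (indexer-side) failure."""
    for indexer in indexers:
        response = send_query(indexer)
        if response.get("error") in UNATTESTABLE_ERRORS:
            # Don't hold the indexer to an attestation it can't stand
            # behind; try the next indexer instead.
            continue
        return indexer, response
    raise RuntimeError("all indexers returned unattestable responses")

# Usage: the first two indexers are broken, so the gateway falls
# back to the third.
responses = {
    "indexer-a": {"error": "store_error"},
    "indexer-b": {"error": "corrupt_database"},
    "indexer-c": {"data": {"transfers": []}},
}
chosen, result = query_with_fallback(list(responses), responses.get)
```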
B
These aren't query syntax errors that graph-node detects, or schema-mismatch errors; they're really errors that result from something being broken on the indexer. This is super important because it also increases reliability in the network. So yes, please, please update, and please read the release notes.
B
They
will
have
a
lot
of
information.
What's
currently
in
progress
on
edgy
node
side
is
that
we
are
planning
what
we
call
fire
data
sources.
There
is
a
gip
out
for
that.
I
believe
in
the
forum
as
well,
so
this
is
essentially
replacing
what
was
previously
ipfs
cat.
Basically,
so
it's
supposed
to
support
file
storage
networks
like
ipfs
or
rweave,
and
basically
segregates
out
the
files
that
we
fetch
from
those
networks
as
part
of
subgraph
indexing
from
data
sources
like
ethereum
contracts
or
near
contracts.
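The separation being described can be sketched like this: chain handlers only record which files they need, and fetching happens out of band so a slow or missing file never blocks deterministic chain indexing. This is an illustrative Python sketch, not graph-node's design; all names are made up.

```python
# Sketch of the file-data-sources idea: file fetches (IPFS/Arweave)
# are decoupled from chain data sources. Illustrative names only.

from collections import deque

class FileDataSourceQueue:
    def __init__(self, fetch):
        self.fetch = fetch          # e.g. an IPFS or Arweave client
        self.pending = deque()      # file IDs discovered during indexing
        self.entities = {}

    def spawn(self, cid):
        """Called from a chain handler: record the file for later,
        instead of fetching it inline like the old ipfs.cat."""
        self.pending.append(cid)

    def process(self):
        """Runs separately from block processing; fetched files become
        their own isolated entities."""
        while self.pending:
            cid = self.pending.popleft()
            content = self.fetch(cid)
            if content is not None:     # missing files don't halt indexing
                self.entities[cid] = content

# Chain indexing only records CIDs; fetching happens out of band.
files = {"QmToken": b'{"name": "Token"}'}
q = FileDataSourceQueue(files.get)
q.spawn("QmToken")     # discovered in an Ethereum handler
q.spawn("QmMissing")   # not on the network yet -- indexing continues anyway
q.process()
```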
B
The first ones that we will most likely be working on are on the trigger side: pipelining the block stream, that is, pipelining the filtering for relevant triggers, so that we don't process block ranges in sequence but pipeline those block ranges. There's also ongoing work to make the Firehose work for Ethereum, or make it work well for Ethereum.
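The pipelining idea can be sketched as follows: while one block range is being processed, the next range is already being filtered on a worker thread, instead of filtering and processing strictly in sequence. A minimal illustrative sketch, not graph-node's implementation:

```python
# Sketch: overlap trigger filtering for the next block range with
# processing of the current one, instead of doing them in sequence.

from concurrent.futures import ThreadPoolExecutor

def pipeline_ranges(ranges, filter_triggers, process):
    """filter_triggers(rng) -> triggers; process(triggers) indexes them."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(filter_triggers, ranges[0])
        for nxt in ranges[1:]:
            triggers = future.result()
            # Kick off filtering of the next range before processing
            # the current one, so the two overlap.
            future = pool.submit(filter_triggers, nxt)
            process(triggers)
        process(future.result())

processed = []
pipeline_ranges(
    [(0, 99), (100, 199), (200, 299)],
    # Toy filter: "relevant triggers" are blocks divisible by 100.
    filter_triggers=lambda rng: [b for b in range(rng[0], rng[1] + 1) if b % 100 == 0],
    process=processed.extend,
)
```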
B
Specifically, one area we're looking into is pre-filtering the information that the Firehose passes to graph-node, on the Firehose side, with subgraph-specific filters provided by graph-node. For sparse subgraphs, or block ranges where there is no relevant information, that just makes things faster and avoids passing all the blocks over to graph-node. And then there's work on the store side.
B
We're
looking
into
pipelining
writing
entity
data
back
to
the
store
just
so
that
when
you
are
done
with
the
block
and
you
you
want
to
write
the
entity,
changes
that
will
not
that
doesn't
block
processing
and
we
can
continue
indexing
additional
blocks.
While
we
are
also
queuing
the
rights
up,
there's
also
an
investigation
into
using
copy
versus
insert
statements
which
are
supposedly
faster.
I
think
newer.
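The pipelined-writes idea can be sketched with a simple write queue: the indexing loop hands finished entity changes to a background writer and moves straight on to the next block. Illustrative Python only; graph-node's store machinery is of course more involved.

```python
# Sketch: entity changes for a finished block go into a queue, and a
# writer thread drains the queue while the indexer moves on to the next
# block instead of waiting for the database round trip.

import queue
import threading

writes = queue.Queue()
committed = []

def writer():
    while True:
        batch = writes.get()
        if batch is None:           # sentinel: shut down
            break
        committed.append(batch)     # real code: COPY/INSERT into Postgres

t = threading.Thread(target=writer)
t.start()

for block in range(3):
    # ... process block, produce entity changes ...
    writes.put({"block": block, "changes": [f"Transfer-{block}"]})
    # Indexing continues immediately; the write happens in the background.

writes.put(None)
t.join()
```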
B
I think newer Postgres releases also have a built-in pipelining mechanism, so we're looking at a few of those kinds of improvements. A slightly bigger feature that we're pondering introducing is immutable entities. In a lot of cases, for instance where you have transfers or something, you create a transfer entity and it never changes.
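A rough sketch of why that helps: versioned (mutable) entities have to find and close the previous live version on every write, whereas an entity known to be immutable can be a plain append with no lookup at all. Illustrative only; graph-node actually versions entities with block ranges in Postgres, which this toy model only imitates.

```python
# Sketch: mutable entities are versioned with a [from, to) block range,
# so every write must close the previous version. An entity declared
# immutable (e.g. a Transfer) can skip the lookup entirely.

def write_mutable(table, key, value, block):
    for row in table:                       # find and close the live version
        if row["key"] == key and row["to"] is None:
            row["to"] = block
    table.append({"key": key, "value": value, "from": block, "to": None})

def write_immutable(table, key, value, block):
    table.append({"key": key, "value": value, "block": block})  # just append

transfers = []
write_immutable(transfers, "tx1", {"amount": 5}, block=10)

balances = []
write_mutable(balances, "alice", 5, block=10)
write_mutable(balances, "alice", 8, block=12)   # closes the version at block 10
```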
B
So
that
would
make
a
lot
of
things
simpler
and
faster
in
graph
node,
when
you,
when
you
know
the
entities,
never
change
yeah
last
but
not
least,
also
started
working
on
previously
talked
about
as
integration,
testing
or
or
proof
of
indexer
cross-checking
or
dispute
analysis.
This
is
now
kind
of
all
rolled
into
one.
One
effort
which
I
think,
like
indexer
cross-checking,
is
a
bit
more
appropriate
for
so
basically
have
a
way.
A
system
to
continuously
cross-check
indexing
results
and
query
results
across
different
environments,
and
those
could
be
indexes.
B
It could also be run in a network mode, where all the indexers in the network are cross-checked, or it could run in a peer-to-peer mode that different indexers can run, where they configure other indexers to collaborate with: to get detailed information from for debugging, or to detect discrepancies in the data or the query results, just to give them more confidence that everything is running fine on their side.
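The core of the cross-check can be sketched very simply: gather each indexer's proof of indexing (PoI) for the same subgraph and block, and flag the ones that diverge from the majority. Purely illustrative; the real effort described above covers much more than this.

```python
# Sketch: flag indexers whose proof of indexing (PoI) disagrees with
# the majority for the same subgraph and block. Illustrative only.

from collections import Counter

def find_divergent(pois):
    """pois: {indexer: poi_hash}. Returns indexers that disagree with
    the most common PoI, i.e. candidates for debugging or disputes."""
    majority, _ = Counter(pois.values()).most_common(1)[0]
    return sorted(i for i, p in pois.items() if p != majority)

divergent = find_divergent({
    "indexer-a": "0xabc",
    "indexer-b": "0xabc",
    "indexer-c": "0xdef",   # e.g. the corrupt-database case from earlier
})
```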
A
Thanks, Jannis, right on time. If you all have five minutes each, that would be great. Next up, Figment. Joseph, are you on the call here?
C
All right, so yeah, I'm gonna start with some challenges we're having right now. The first one is around the performance of syncing all the block heights. This is currently a research task, so we have it here under challenges, but at the same time we have it on the next steps: we are currently trying to find some strategies and doing some research about how we could actually increase the performance.
C
I think this is going to be a room for collaboration with the other core dev teams as well. We also faced some issues related to the merger, and we actually created a tool to verify the integrity of the merged files, so we have that here as an achievement; it's a challenge that was really resolved. For everyone here on the call: everything that we're doing right now is around integrating Tendermint.
C
We are close to the final steps right now, so there's a lot of finalization and some bug fixing and testing. On the active side, we are implementing Tendermint within the Graph CLI right now and wrapping up the last pieces of the extractor package; those are the few last steps for integrating Tendermint. We were able to finish the event trigger refactoring and, as I said, we fixed a lot of bugs and did some updates on the ingestor stack, and we are now running
C
The
entire
multi-node
setup.
That's
the
ingester
relayer
merger
fire
halls
everything.
The
plan
is
to
keep
it
running
for
two
weeks
now
to
be
able
to
test
it
and
check
if
there's
something
malfunctioning
and
if
we
run
into
some
problems
and
yeah.
That's
it
like
for
for
everything
that
we're
doing
currently
and
the
next
step
is
actually
the
deployment
of
the
firehouse
tag,
keep
testing
it
and
create
a
full
full
subgraph
which
we're
going
to
be
using
actually
also
for
our
testings.
D
So, let's do that. We have done a lot of work on Solana in the past, as you guys know, and there's been a release of the NEAR stuff. That is continuing a little bit, processing the testnet. We put a PR out to update the latest data scheme up there, so NEAR should be more advanced and ready to be used more. So everyone out there, if you want to have NEAR indexing, try it out, go ahead: now is the time.
D
We've done a lot of Solana work, and we're continuing that work. There's a lot of things that need to be managed there, and we might have some discussions today, if we have the chance, about how we're going to share that, if not everyone wants to synchronize from genesis, which is pretty hard. That's one thing. I would also like to call out to all the indexers who have not yet spun up a Firehose.
D
Please
do
if
you
want
to
be
the
kingpin
in
the
graph
run,
a
fire
hose
and
start.
You
know
start
doing
that
work.
So
we
can
accelerate
know
the
the
performance
improvements
downstream
for
subgraph
developers
get
involved
in
that
that's,
it's
pretty
cool
and
you're
going
to
be
really
highlighted.
We're
gonna
have
a
little
board
there
with
your
face
or
something
when
you,
when
you're
one
of
some
of
the
first
to
do
that,
we've
been
tackling
some
bsc
issues,
a
lot
related
to
the
node.
So
that's
been
annoying.
D
You
know
people
confusing
the
graph
subgraphs
and
the
nodes.
Hopefully
we
can
bring
the
firehose
for
bsc
soon,
so
we
get
that
performance
boost
that
comes
from
having
it.
D
So,
if
you're
interested
to
be
a
more
kingpin,
do
that
with
bsc
satisfied
with
the
demand
we're
working
right
now
on
a
lot
of
improvements
to
the
like,
like
janis,
told
us
about
the
performance
and
the
sparseness
of
fossil
we're
implementing,
transforms
right
now
for
solana,
which
is
required
and
then
ingraining
that
for
acceleration
of
sparser
queries
or
larger
speed
filtering
on
whole
histories.
D
So
that's
approximately
it
right,
oh
yeah,
so
the
team
is
also
working
to
make
sure
that
call
handlers
on
ethereum
work
well
with
the
firehose
and
doing
some
comparison
stuff.
So
that's
going
to
be
cool.
Okay!
I
don't
want
to
take
too
much
time
here.
We've
been
a
little
bit
of
a
collaboration
with
figment,
which
I'm
really
happy
we're
going
to
continue
on
doing
and
and
reviewing
the
the
gip
coming
out
of
there.
The
tendermint
work.
A
Yeah, amazing. We might have some time to properly discuss that today. Cool, thanks, Alex. Okay, yeah, there's a lot of teams; I'm counting seven on this call, which is crazy, and we don't even have everybody here. So I'd also like to take the time to shout out The Guild, a new team that's been actively working on subgraph features.
E
Yeah, sure. I'll start with a short introduction: I'm Dotan, a member of a group of developers called The Guild.
E
We're a group of open source developers mainly focused around GraphQL. We're also part of the GraphQL Foundation: we maintain the reference implementation of GraphQL and help with the maintenance of the GraphQL spec. We also work on tools around GraphQL, like GraphiQL and a few more, and we're building tons of open source around GraphQL, like GraphQL Mesh, GraphQL Code Generator, GraphQL Inspector, ESLint plugins, tons more tools. Yeah, we're super happy to join and be a part of this huge thing. We officially joined just a few days ago; the contract was just signed.
E
I won't take much time; I have a very quick update on what we're doing. We're taking baby steps on learning graph-node and everything related to it, and we're trying to compare and learn from the implementation based on our experience and knowledge from the GraphQL ecosystem that is not Rust-specific. I have a few more updates here, but the biggest thing we're going to work on soon, I guess, will be API versioning and composition of subgraphs.
E
That's all from us. Super excited to be part of the team. Thank you.
A
Amazing, thanks, Dotan. It's good to know we have a new team focused on GraphQL stuff; these guys know what they're building, for sure. So GraphQL composition will be a major milestone, hopefully in the upcoming months. Cool, thanks a lot. Lastly, we also have LimeChain. LimeChain has been working very closely with us, and there's a lot of new tools coming out. Specifically, we have a new one called the subgraph debug tool. I don't think many core devs know exactly what this tool does.
A
So
might
be
a
good
time
to
talk
a
little
bit
about
this.
Do
we
have,
I
think
I
saw
yulia
on
the
chat
or
petco.
I
don't
know
guys.
Can
you
update
us.
F
Yes, hi, this is Petko. Zoom doesn't seem to give me feedback on the sound. So basically, we are the team behind the Matchstick unit testing framework, and I think I spoke about Matchstick a bit on the last core dev meeting. But what's really the big news right now is this debug tool that was mentioned; it's also called a subgraph forking tool.
F
Unfortunately, our teammate, the developer who was mostly involved with integrating this into graph-node, is not here, but I can give a brief explanation.
F
The whole idea behind the debug tool was that really often a subgraph will fail for some reason after it's been deployed and after it's synced a few million blocks. At some point it will fail, and in order to get the subgraph up and running again, you would need to fix what you think is the issue,
if you have the right logs, and then you would deploy again as a subgraph developer and hope that you had fixed the issue. But then, of course, the subgraph usually starts syncing from the beginning, from block zero, which is not really good, because it can take a really long time to actually see if your fix has done the job. So what this subgraph debug tool will allow subgraph developers to do is basically run their mappings using the store from the failed subgraph.
F
Essentially, it will just run locally, just one block, the block that the subgraph failed on, and it will use the store that's already there for the failed subgraph. That works because the subgraph was syncing just fine until now, and it's the most recent block that's actually causing the issue. That way, subgraph developers can get feedback really fast on whether they've managed to fix the issue.
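The forking idea can be sketched as a store with local writes and remote reads: replaying the failing block against your fixed mappings reads the already-synced state from the failed deployment's store, while new writes stay local. An illustrative Python sketch, not graph-node's actual API; all names are made up.

```python
# Sketch: a "forked" store for replaying just the failing block.
# Reads fall through to the failed subgraph's remote store; writes
# from the fixed handler stay local. Illustrative names only.

class ForkedStore:
    def __init__(self, remote_get):
        self.remote_get = remote_get   # reads from the failed subgraph's store
        self.local = {}                # writes land here, not remotely

    def get(self, key):
        if key in self.local:
            return self.local[key]
        return self.remote_get(key)    # fall back to the synced remote data

    def set(self, key, value):
        self.local[key] = value

# Replay only the failing block: reads see the remote state at the
# fork point, writes from the fixed mapping land locally.
remote = {("Token", "0x1"): {"supply": 100}}
store = ForkedStore(remote.get)

token = dict(store.get(("Token", "0x1")))
token["supply"] += 1                   # the fixed mapping logic
store.set(("Token", "0x1"), token)
```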
F
So that's the main thing we've done. The debug tool has been finalized and is actually part of graph-node, and I can maybe share some more information about that here in the chat or on the Discord.
F
I guess one of the more important new features is that Matchstick will now have first-class support for Docker. Initially we tried going with the approach of having binaries pre-generated for a lot of operating systems, so that subgraph developers could use the compiled binaries when running Matchstick. But that didn't turn out to be the best idea, because a lot of subgraph developers reported issues locally.
F
So that's why we think the best way moving forward is to use Docker right out of the box. Of course, if a subgraph developer wants to do otherwise, they can: they have three options, basically, using the compiled binary, using Docker, or building Matchstick themselves locally and then using that binary. So yeah, I guess that's the most important thing. In the future we will be researching a Hardhat plugin,
but I can't give more information about that since it's in very early stages for now. And of course, we will be working on stabilizing more features in Matchstick. But yeah, that's basically it; I will send more information about everything I just spoke about in the chat.