From YouTube: The Graph's Core Devs Meeting #7
Description
The Graph’s Core Devs Meeting #7
This video was recorded: Tuesday, September 7 @ 8am PST, 2021.
Sections:
0:00 Intro
1:30 Engagement in the Community
21:26 Initial Curation MEV Analysis
35:50 Forum Proposals
58:10 Outro
The Graph's Media:
Twitter: https://twitter.com/graphprotocol?s=20
Instagram: https://instagram.com/graphprotocol
LinkedIn: https://www.linkedin.com/company/theg...
Website: https://thegraph.com
A: Hello everyone, welcome to Core Devs Meeting number seven. Today we will start experimenting with a bit of a different format than what we've done in the past. We want to get more engaged in our discussions, and therefore we have not prepared a formal agenda or any presentations; instead we'll keep it conversational. We have posted three discussion topics in our forum that we will cover today. The first one will be from Figment, and they will be talking about their latest forum proposals and looking to engage the community here for feedback.

The second one is moderated by Sam and Brandon; we're going to be talking about the latest forum posts around proposals to enhance the curation experience. And then, lastly, we have Bubble Tea, who is a grantee; he has created a Python library and is going to show us the progress that he's made on that and what it's all about.
B: So before we joined The Graph, we were actually doing some research about how we're going to index more blockchains and integrate our solutions. To that end we had published two forum posts about our previous research, which summarize what we're trying to achieve and what we're trying to do, and talk a little bit about our solutions. We hope that you have read them and have some questions. We are actually looking for some feedback and some questions to start the conversation; how do you all feel about it?
C: I've read your proposals. I have, let's say, a comment, or rather I would like to make sure I understand something well, to see how this could or could not be integrated within the ecosystem. It is my understanding that, in the model you guys are proposing, the manager system relies on a Postgres database and does some indexing work, right?
D: Hey Alex, so this follows up on plenty of our earlier conversations about that. First of all, we approached this problem from the perspective of a company that has done the indexing work, and we tried to propose the most reliable solutions you can get, so that it's very easy to have a reliable setup, index new chains more reliably, and extract the information in the proper way.

So, on the extraction part: a subgraph developer does not necessarily need to know the quirks of the network and the network-level protocols used to fetch the data. They would just use the interface that they are already greatly familiar with, GraphQL. And to bind those two pieces together, you need to have some form of an API that allows you to query the data that you would actually like to receive as a subgraph developer.

This is why we need some form of a data store, which is left unspecified in our proposals: it might be a Postgres database, or it might be a file-based storage, shared by the manager process through the API, and the subgraph would be able to query that process for the next batch of information. We also proposed plenty of different optimizations, like subscribing to filtering done in the manager.

So if we have very basic structures, and those structures have a few relations between each other, like transactions and accounts, this small number of data pieces has to be linked together to form some kind of index. We can do that because we know it will be these 10 or 20 structures that we need to link. It would be much faster than actually running all of this through the subgraph runtime, because you could...

You could just run it that way as well, but we cannot, because we don't know what's inside every smart contract. This is how I see subgraphs as of today, and I believe this was one of the reasons why subgraphs even exist. However, I might be wrong, so maybe Adam can clear that up.
C: I would like to isolate this a little bit. Lukasz, we've had some of these discussions together; I'd also like to have input from other people, it's not just you and I here, please. But the way I see it, the fact that there's a GraphQL query to a place which can navigate relations is query-time indexing; that is, a query-time lookup of indexes, and to me that is the role of a subgraph, right?

I still have an issue with having to query a database that goes and navigates through indexes. Because we're trying to index, there needs to be a layer where there are no indexes, or very few, so that we can have a lot of parallelism and a lot of throughput, and which also does not require CPU, memory and large systems running. I'm thinking of the files approach; and then have the thing you guys are offering as a network subgraph be literally a network subgraph.
D: Yeah, so the thing that we offer, the network subgraph as Steve said, is a very initial piece of the data that you take to construct a subgraph. If you like, I can start talking about GIPs 13 and 14; if you haven't read them yet, you're very much welcome to.
E: So, if I understood correctly, what Alex was getting at is: you have an indexing process that is putting data into an intermediary representation that then lives in Postgres, and then subgraphs are indexed on top of that. So you have multiple layers of data representations.

They kind of layer on top of one another, and I think it would be helpful, for example, to discuss your approach to indexing in isolation from everything else: how you plan on getting data out of the underlying blockchain networks, whether that's by RPC or whatever, and then putting that into a database.
F: So it feels to me like this is a pretty detailed engineering and architectural discussion, and I think people need a lot of context to be able to provide input and to process these types of design discussions. So I wonder if this is the best format for that type of discussion. It feels to me like this is the kind of thing that's best done in focused working groups, as working meetings.

I know we've already had several of those, and I think we just need to continue that process; that's my feeling from this.
D: Yes, I totally agree with you. I didn't quite expect that detailed a question from Alex, so I tried to do my best and answered it. It would be great to emphasize that we haven't assumed plenty of the things that Alex mentioned; we don't assume Postgres. We process data the way we do because we believe that the best way to integrate the next networks that will come to The Graph ecosystem is through a common interface.

One example is network maintainability and subgraph maintainability in a few years: you wouldn't really need to run a network node in two years to still be able to query the data. That is one approach we've taken that is completely different from the Firehose, and there are plenty of other differences. So I highly encourage you to look at these GIPs; I've sent both links to the channel here.
B: Yeah, just a small note on today. What I was actually trying to achieve today is that we had posted two forum posts on the forum and didn't get any questions or feedback, so today's objective was to converge the attention of the community on them, and maybe get some feedback and questions on them without getting too much into the technical details. So I guess, to really say it once again: please, community, read our posts and share your questions and feedback.

That would be great. But I think that's all from us for now. Thank you.
G: I think one high-level question I'd have, which might just help position this in people's minds, is: are these proposals intended as an alternative to the Firehose, the proposal put forward by StreamingFast? Or do you see this as something that would complement the Firehose, as in being a part of the data pipeline and serving perhaps complementary, but largely disjoint, purposes?
B: Yeah, that's an excellent question, Brandon. Right now we are in communication with Alex and StreamingFast to see if we could make both solutions work, and we're actually trying to see how we could fit the two solutions together, maybe with some communication between them. But maybe that won't be achievable, and then we're going to try to see which solution makes more sense on which network; so maybe each solution could live on a different set of networks.
G: I would hate to see disjoint solutions on different networks, personally, as part of The Graph community. One thing I'll just add to the conversation, which might be worth considering as part of this breakout working group, is the role of subgraph composition.

That's something that's been discussed in the community for a long time: both composition, where you have subgraphs sort of relating to other subgraphs at query time, but then also what's been called subgraph pipelining.

That's where you can have subgraphs whose event handlers and mappings are basically consuming data that's triggered from other subgraphs, and that sounds very similar to what you're describing with this kind of network subgraph. In a world of subgraph pipelining you wouldn't even really need an either/or between what you're describing and using the Firehose at the extract level, because you could basically create a network subgraph on top of the Firehose, and then, through the generic framework around subgraph pipelining, someone could choose to use that as part of their pipeline, or they could choose to consume data directly from the Firehose, perhaps for better performance by reducing the steps in the pipeline. So these things may not be as at odds with one another as it might seem to outside folks listening to the call, but it's worth thinking through these things holistically.
F: Yeah, I think it would be worth maybe finding some agenda, and I don't know what the right form would be: it could be just with the core devs, or maybe a call like this, to go over some parts of the roadmap, things like subgraph composition, data pipelines, things like that, so that people can kind of fit that into their view of the solution space.
C: I think it'd be really great to have some discussions; maybe, Brandon, if you have thought through these things. On the dynamics of such composition, I see many issues: you say that and there are race conditions, there are a lot of issues I see. Maybe it's unfounded, but I'd be really curious to bounce some ideas around, because right now, from the get-go, I see that as more risky.

I'd also like to push on something here: I put a response by Zach to the parallelism thing in the comment box, and it showed me in a way that I'm lacking in understanding (my team, perhaps the other guys, are better at this than I am) of the constraints that we want or need to put on systems to make sure things are deterministic, and of the sort of environment we want to provide developers. In that light, until I understand that properly, I'm wondering: is that going to be possible?
F: That's right. I think that's another topic for a presentation that we need to do, because there are a lot of design constraints that we've basically had in our minds since we started the project almost four years ago, and I think it's important to try to communicate what those core design constraints are, so that, as new teams come in, it's something that really everybody understands deeply. And this stuff also matters because we've got The Graph protocol

v1 that's live now, which uses the arbitrator and the dispute system for security, and then there's a v2, right, that we're working towards, that we haven't spoken a lot about; but that's also something that we need to design for, to make sure that the things that we build work with an updated security model.

So it is something we should put on the books; why don't we speak to the foundation folks to get something scheduled there. And I think it would help to have somebody like Zach give a presentation. We can make it public, and that way everybody that's working on different features can keep in mind the design constraints required for security.
H: Yeah, go ahead; just on that: we've obviously got quite a long working session tomorrow and the next day, and I don't think everyone on this call is in it. But one of the initial things we're covering, which we just skipped yesterday, is some of the constraints that we have in our minds and that we've sort of documented.

So we can cover them in that session, but I think it's also probably worth pulling them together into a more consumable presentation, or otherwise, for the wider community too.
I: Thanks Oliver. All right, hi everybody. This is going to be an initial MEV analysis of curation.

So, as of today, about a million GRT has been extracted by what we have labeled curation MEV. As a refresher, or an introduction, I'm going to explain how bonding curve designs basically set up mechanisms for pump and dump schemes. In bonding curves you have some shape, which could be exponential or could be linear, but the main point, or a common property, of bonding curves is that earlier people who get in get to buy shares cheaper than later people who buy shares.

So, for example, the first share in a bonding curve is going to cost less than the 10th share. In this plot I'm showing the number of outstanding shares: when we have one outstanding share, the cost of the next share is going to be here (this is just a qualitative graph), and once we get to 10 outstanding shares, the cost of the next share is going to be higher, and so on.
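The "earlier shares cost less" property can be sketched in a few lines. This is only an illustration with an assumed linear price curve (price = slope times shares outstanding), not the actual Bancor curve used by the protocol:

```python
# Sketch of the property described above, on an assumed linear bonding curve.

def cost_to_mint(shares_out: float, n: float, slope: float = 1.0) -> float:
    """GRT cost to mint `n` new shares when `shares_out` shares already exist.

    Integral of price(s) = slope * s over [shares_out, shares_out + n].
    """
    a, b = shares_out, shares_out + n
    return slope * (b * b - a * a) / 2.0

first_share = cost_to_mint(0, 1)   # cost of the very first share
tenth_share = cost_to_mint(9, 1)   # cost of the 10th share
assert tenth_share > first_share   # later buyers always pay more per share
```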
I: So how do we use this for a pump and dump, in The Graph, for curation? A pumper is going to buy some amount of shares, which is represented by this shaded area, and then what happens is that the pumper waits for honest curators, and perhaps other pumpers, to come in behind them and buy more shares, all at a higher cost.

After some point in time, the pumper sells the exact same number of shares at a much higher price. The key point here is that even though the pumper bought at this lower price, when they sell, they sell at the current price; and that's how pump and dump is made possible with bonding curve designs.
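As a concrete walk-through of the mechanism being described, here is a toy pump and dump on the same kind of assumed linear curve; the amounts and the slope are made up for illustration:

```python
# Toy pump-and-dump on an assumed linear bonding curve (slope 1).

def mint_cost(shares_out, n, slope=1.0):
    a, b = shares_out, shares_out + n
    return slope * (b * b - a * a) / 2.0

def burn_proceeds(shares_out, n, slope=1.0):
    # Burning n shares walks back down the curve from the current supply.
    a, b = shares_out - n, shares_out
    return slope * (b * b - a * a) / 2.0

supply = 0.0
pumper_cost = mint_cost(supply, 10)     # pumper buys 10 shares early: 50 GRT
supply += 10
honest_cost = mint_cost(supply, 10)     # honest curator buys 10 later: 150 GRT
supply += 10
pumper_out = burn_proceeds(supply, 10)  # pumper sells the same 10 shares: 150 GRT
supply -= 10

pumper_profit = pumper_out - pumper_cost        # 100 GRT profit
honest_recoverable = burn_proceeds(supply, 10)  # only 50 GRT left to the honest curator
```

The pumper's profit is exactly the loss the honest curator would realize if they sold: the curve itself transfers the value.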
I: Yes, thank you. Yeah, when I'm saying "pumping up": well, we'll see the impact on the honest curators in a moment.

So I want to note that it can be difficult to distinguish pump and dump from, let's say, honest curators who may jump into a subgraph and then, I don't know, stumble out just because of fear or confusion or whatever. I can't tell the difference between that and bots. So anybody who goes in and out within two minutes of the subgraph launching is getting labeled a pump-and-dump curator.
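The two-minute rule can be written down as a small classifier. This is a hedged sketch: the event shape (curator, action, timestamp) is an assumption for illustration, not the actual analysis code:

```python
# Minimal version of the labeling heuristic described above: any curator who
# both mints and burns signal within the first two minutes of a subgraph's
# launch is labeled a pump-and-dump curator.

WINDOW_SECONDS = 120

def label_pump_and_dump(events, deploy_ts):
    """events: list of (curator, action, timestamp), action in {'mint', 'burn'}."""
    in_window = [e for e in events if e[2] - deploy_ts <= WINDOW_SECONDS]
    minted = {c for c, a, _ in in_window if a == 'mint'}
    burned = {c for c, a, _ in in_window if a == 'burn'}
    return minted & burned  # bought AND sold inside the window

events = [
    ('0xabc', 'mint', 10), ('0xabc', 'burn', 90),  # in and out within 2 minutes
    ('0xdef', 'mint', 30),                         # only bought: labeled honest
]
assert label_pump_and_dump(events, deploy_ts=0) == {'0xabc'}
```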
I: For these numbers, we're looking at 63 subgraphs in the statistics I'm going to be showing you. So far, we've measured 626 pump and dump events by 201 unique pump-and-dump curators.

And if you look across all subgraphs and just analyze the first two minutes after each subgraph is launched, then 78 percent of the curation events are being labeled by this definition, this buy-and-sell definition, as pump and dump activity.

And when we look at the total numbers within the two-minute launch window across all subgraphs, we have four million GRT of pump and dump curation in, five million of pump and dump curation out, for a profit of around a million GRT (all of these numbers are in GRT), and we have 1.5 million GRT in by honest curators. These are curators that have only bought within the two-minute launch window and have not sold within it.
I: Those are the most important numbers. I'm also going to show you some visualizations that we made during an earlier analysis: some plots showing buy, sell and hold activity on various subgraphs. At the top of each plot you'll see the name of the subgraph.

Red means that someone bought and sold at a loss, and a gray dot means that they're holding. I'm not going to talk too much about the labels, but Slim Chance and Some Chance and Graph God helped me and gave me some curator IDs that were suspected of being pump-and-dump or bot related.

These plots go beyond a two-minute time span, but I already had them built and I just thought they were interesting, so I thought I'd show them to you all.

They help visualize some of the profits and losses, the impact of this buy and sell activity on others. So here we had a curator buy at this point, and then they sold at a much higher price; and they made all that profit because another curator came in behind them and increased the value of the shares that they had bought, and then they sold at a much, much higher level. And then you see the same person sold later too, and they also made a profit.

How did they make that profit? Well, an honest curator came in after them, bought shares, and increased the current price on the bonding curve, which allowed them to also sell at a profit.

I guess it's kind of nice in this case, because the first curator here paid the gas cost of setting up the bonding curve (there's an initialization cost for the first curator), so it actually saved this person some money in this case.

Okay, here is a more active one. We see kind of a bloodbath, with all this red: lots of profit and loss being made over a long time span. It's hard to see with these plots, but it looks like we have some pump and dump activity, although I haven't zoomed in.
I: Okay, these are some overall statistics; now I'm going back to the purely pump-and-dump statistics. Here's a histogram of the realized profits of pump-and-dump curators, and we see that most of them are making nothing with their activity. The fact that most of them are making nothing, so the median is nothing, makes me think that there may be a significant amount of activity where people are just stumbling in and out of curation.
G: I think, to Martin's point, and maybe to Sam's earlier, it's tough to assess the underlying behaviors, and even, just from looking at the on-chain data, what the actual real-world sequencing of events is.

So I suspect many of these things that are being labeled pump and dump are in some cases sort of ape-in behavior; in fact, that's kind of what we've heard from curators in the community. And then the other sort of variant here is most likely sandwich attacks, which are a class of front running: it's not actually the attacker getting in first to pump up the curve and draw in honest curators.

It's that they're able to front-run the transactions of honest curators that have already been sent, or that they know will be sent because the subgraph was just deployed, and then immediately exit after that curation is in. But all we can ever observe directly is the on-chain sequencing of events; we can infer, but we don't know for sure, the real-world sequencing of actions.
H: However, for now we haven't found any Flashbots usage by those suspected bots, so it seems that, for now, no one is using advanced attacker tools.
I: Are you saying that in response to the sandwich term?
B: Did you collect any statistics about the initial amount curated by these pump-and-dumpers? Like, what is the median amount curated?
I: Joseph, I can calculate that and I'll get it to you.
B: Yeah, because something I was thinking about is the return ratio. If someone wants to run this kind of strategy as one of the early curators: if they curate, let's say, something around 1k GRT and are then followed by 10k GRT, the return ratio could be, I'm just saying a random number, 20x. But if they curate 10k and are followed by 10k, the return ratio could be much lower. So a strategy could be to curate a small amount first, which could get you a higher return ratio.
A: A good segue, maybe, to switch gears into the forum proposals that Brandon has shared with us here lately, over the last week. Brandon, why don't you go ahead and share your thoughts.
G: I'll turn my video on too. Yeah, so there are a few conversations in the forum; this is kind of a plug for folks to go check them out. First, I want to give a shout-out to Derek, aka Data Nexus, and Orion, aka Slim Chance, who I think drove a lot of the initial conversations in the forum and inspired some of these follow-on mechanisms. I think there was one around a dynamic curation tax, which I also recommend checking out. Before I get into the actual mechanisms, I just want to take a step back and acknowledge that the purpose of curation wasn't to create a casino for front-running bots to try and profit off of other front-running bots.
G: Just to recap: one of the primary purposes is for it to be a communication mechanism between subgraph developers and indexers, signaling that they intend to direct query fees towards the subgraphs that they've deployed, and to basically attract indexers to index those subgraphs so that they can bootstrap their dapps and run their applications. And for the non-subgraph-developer curators, their role is kind of this extra layer of prediction: to predict which of the subgraphs that have been deployed will become the most valuable and should be indexed. Their participating in the network also improves the capital efficiency of subgraph developers, because now subgraph developers can get an exit, so to speak: once the rest of the community realizes "hey, this is a valuable subgraph to signal on", they can actually remove their initial signal,

which frees that capital up to do other things, to improve their dapp or The Graph ecosystem, as it may be. So having the curation mechanism subject to this sort of activity, front running or sandwich attacks or ape-in behavior, definitely hurts the network, because it makes the mechanism more unpredictable for subgraph developers to use, as well as for honest curators.
H: I think it's on the one that changed the reserve ratio, from the bonding curve, from the GNS and the curation; that one.
G: This is something we've heard from subgraph developers that I just want to call out as being important. The way the GNS works right now (it's kind of this higher-level contract that wraps the core curation mechanism at the base protocol layer) is that when you upgrade a subgraph, it migrates all the signal from the previous subgraph to the next subgraph in one step. And it does that because the alternative would expose the subgraph developer to front running, right: if they incrementally moved their signal from the previous subgraph to the next one, then they'd be subjecting themselves to getting front-run on their own subgraph. We've already seen that type of activity take place, for different reasons, and there's a protocol upgrade scheduled for that specific reason that maybe Oliver will talk about later.
G: But this is bad, right? The fact that subgraph developers have to migrate it all in one step means that, in some cases, indexers may be incentivized to stop indexing the previous version before the next version is fully indexed. So figuring out a generalized way, or at least a way in specific contexts, to mitigate the effects of front running also enables us to add better N-minus-one version support for subgraph developers, because they'd be able to safely leave some signal behind without worrying that they'll basically lose out on the opportunity to signal on their own subgraph.
So I'm going to give a brief overview of the solutions that are in the forum. Maybe, Oliver, when you get a chance, you could post in chat just how much time I'm allotted here. But there are three different proposals. The first is what I call a decaying capital gains tax in the curation market, and what this effectively does, very intuitively, is this:
G: It diminishes the opportunity to make profits over very, very short time periods. Many of you who trade equities or other assets might be familiar with the concept of a capital gains tax: the idea is that when you buy an asset, you track the cost basis of that asset.

The idea here is that if you then realize a profit over only the span of several blocks (as Sam was showing with those sort of vector-field visualizations in his analysis), the majority of that profit would be taxed away; profits that were earned over the course of a single block should be taxed close to 100 percent. And in order to not impose a deadweight loss on the curation mechanism as a whole, that tax is actually returned into the reserves of the bonding curve.

Because the tax is returned to the bonding curve, the honest curators, the ones on whose backs this profit is being made, actually have that value returned to them. So it becomes less harmful for honest curators to participate in the mechanism, while simultaneously becoming less profitable for attackers to participate in the mechanism.
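A minimal sketch of the decaying tax idea, assuming an exponential decay with a made-up half-life; the actual forum proposal may specify a different decay schedule:

```python
# Hedged sketch: profit realized one block after buying is taxed at ~100%,
# and the rate decays with holding time. Shape and half-life are assumptions.

def capital_gains_tax(profit: float, blocks_held: float,
                      half_life_blocks: float = 1000.0) -> float:
    """Tax owed on a realized profit; per the proposal sketch above, the
    taxed amount would be returned to the bonding curve's reserves."""
    if profit <= 0:
        return 0.0
    rate = 0.5 ** (blocks_held / half_life_blocks)  # 1.0 at block 0, decaying
    return profit * rate

# Near-instant flip: almost the whole profit is taxed away.
assert capital_gains_tax(700.0, blocks_held=1) > 699.0
# Long-held position: almost all of the profit is kept.
assert capital_gains_tax(700.0, blocks_held=10_000) < 1.0
```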
G: The big downsides of this approach are the extra bookkeeping (you need to keep track of both the time basis and the cost basis), and there are also some questions around whether this should exist at both the core protocol layer and the GNS, or just one or the other, and how these things would compose. So there's some complexity there that still needs to be evaluated. Ariel, an engineer at Edge & Node, has been taking a look at a number of these proposals to go a little bit deeper and also evaluate the gas costs. The next one, which was actually a proposal put forward in the forums by Orion, is called the subgraph showroom.
G: Block Science did some work on what were called augmented bonding curves, and both of those continuous-organization designs had this thing called an initialization phase. Block Science's augmented bonding curves (this is from back in 2017-2018) had this concept of a hatching phase. The idea is that, as we've seen in our protocol, bonding curves are very sensitive to initialization and decommissioning conditions.

This is feedback that we've gotten from curators in the network: when a new subgraph gets deployed, the overwhelming incentive is just to curate immediately, regardless of whether you've had time to actually evaluate the quality of the subgraph or not; which, if you recall, is the purpose of the curation mechanism: to predict the quality of these subgraphs. And the reason that there's such an overwhelming incentive to curate immediately is that the first one in, as Sam showed in his diagram, gets the cheapest price, and then the price increases somewhat dramatically after that, as a function of the amount of shares that have been minted in that bonding curve. What the initialization phase does is say: for some period of time after a subgraph is deployed, we will keep the price uniform, essentially a flat bonding curve.
G
So
anyone
that
wants
to
get
in
or
out
of
that
bonding
curve
during
that
initialization
phase
is
guaranteed
a
uniform
price,
and
so
what
that
means
is
all
the
you
know,
all
the
pump
and
dump
for
sandwich
attacking
or
ap
in
an
activity
that
you
know
sam
was
showing
it
just.
It
wouldn't
be
profitable
because
you
would
be
buying
and
selling
shares
in
the
bonding
curve
at
exactly
the
same
price.
So
the
pr,
the
you
know,
there's
a
zero
profit
condition.
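The zero-profit property of the initialization phase can be seen in a tiny sketch; the window length and the prices here are assumptions for illustration, not the proposal's actual parameters:

```python
# Sketch of the showroom idea: during an initial window after deployment the
# curve quotes one uniform price, so a buy-then-sell round trip nets zero.

INIT_WINDOW_BLOCKS = 100
FLAT_PRICE = 1.0
SLOPE = 1.0

def share_price(shares_out: float, blocks_since_deploy: int) -> float:
    if blocks_since_deploy < INIT_WINDOW_BLOCKS:
        return FLAT_PRICE         # showroom phase: everyone pays the same
    return SLOPE * shares_out     # afterwards: the normal rising curve

# Round trip inside the window: buy 10 shares, sell 10 shares, zero profit.
buy_cost = 10 * share_price(0, blocks_since_deploy=5)
sell_out = 10 * share_price(10, blocks_since_deploy=6)
assert sell_out - buy_cost == 0.0
```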
G: Basically, that holds throughout that phase. Now, something that I added to the original proposal around the showroom was an initialization exit phase.
G: The original proposal just had the initialization phase, where you have the uniform price, but it's important to not have a discontinuity in the reserve ratio of the bonding curve, because you might end up in a condition where the price gaps upwards quite dramatically over a very short period of time. In those cases you actually get back-running, right, where the first person to exit the bonding curve is going to make an outsized profit relative to all the other honest curators in the curve; so again you'd get some variation of people racing for the exit.
G: You might see people use Flashbots in the future, so you don't want to create this large discontinuity in price. That's what the exit phase is for, and the idea is that you gradually change the reserve ratio from a flat bonding curve to the steeper target reserve ratio that we have in the protocol today. Just to refresh everyone's memory, that's a reserve ratio of one half, which effectively amounts to a linear function.
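Why a reserve ratio of one half amounts to a linear price curve: under the Bancor formula the spot price is reserve / (ratio * supply), and with ratio = 1/2 the supply grows as the square root of the reserve, so price is proportional to supply. A quick numeric check (unit scaling assumed):

```python
# With ratio 1/2, supply ~ reserve ** ratio, so doubling the supply doubles
# the spot price: a linear curve. Scaling constants are set to 1 here.

RATIO = 0.5

def supply_from_reserve(reserve: float) -> float:
    return reserve ** RATIO

def spot_price(reserve: float) -> float:
    return reserve / (RATIO * supply_from_reserve(reserve))

s1, s2 = supply_from_reserve(100.0), supply_from_reserve(400.0)
p1, p2 = spot_price(100.0), spot_price(400.0)
assert abs(s2 / s1 - 2.0) < 1e-9   # supply doubled...
assert abs(p2 / p1 - 2.0) < 1e-9   # ...and price doubled with it
```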
the
last
so
real
real
quickly.
G
Actually,
before
you
get
to
the
last
proposal,
I'll
mention
that
both
of
these
proposals
link
to
a
pro
prototype
in
observable
hq.
This
is
like
kind
of
an
online
jupyter
notebook
except
it's
written
in
javascript.
Here
I
I
kind
of
have
some
a
base
class.
You
know
that's
a
simple
bank
core
bonding
curve.
You
know
you
can
expand
all
these
cells.
G
You can see the implementation, and then I created a series of mixins to show the effects of these different proposed designs. The flat deposit tax is actually something that we already have in the protocol, so that's just implemented here as a mixin. Here's the initialization phase implementation, and the decaying capital gains tax one.
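The structure being described, a Bancor-style base curve with behaviors layered on as mixins, might look roughly like this (a sketch of the pattern, not the prototype's actual code; the class and mixin names are mine). The mint and burn formulas are the standard Bancor ones for a reserve ratio r.

```javascript
// Bancor-style bonding curve base class, with a flat-deposit-tax mixin
// layered on top -- a sketch of the structure described, not the
// prototype's actual code.
class BancorCurve {
  constructor(reserveRatio) {
    this.r = reserveRatio; this.reserve = 0; this.supply = 0;
  }
  mint(deposit) {
    // First deposit bootstraps the curve 1:1 (a simplifying assumption).
    const shares = this.supply === 0
      ? deposit
      : this.supply * ((1 + deposit / this.reserve) ** this.r - 1);
    this.reserve += deposit; this.supply += shares;
    return shares;
  }
  burn(shares) {
    const payout =
      this.reserve * (1 - (1 - shares / this.supply) ** (1 / this.r));
    this.reserve -= payout; this.supply -= shares;
    return payout;
  }
}

// Mixin pattern: wrap a class to add a flat tax on every deposit,
// mirroring the curation tax already in the protocol.
const withDepositTax = (Base, taxRate) => class extends Base {
  mint(deposit) { return super.mint(deposit * (1 - taxRate)); }
};

const TaxedCurve = withDepositTax(BancorCurve, 0.025); // 2.5% deposit tax
const curve = new TaxedCurve(0.5);
curve.mint(1000); // only 975 GRT actually enters the reserve
```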
G
I don't have time to go into all the implementation details, but I'll just note the structure for now. There's also a simple simulator: I basically created a way to declaratively run simulations of different types of activity in the bonding curve. So we have this little simulation harness, and then we have definitions of these different attack scenarios. Here I'm showing what I call a sandwich attack: an honest curator is trying to signal a thousand GRT.
G
An attacker front-runs them by signaling 10,000 GRT, and then immediately burns all their shares after the curator has signaled. You can see in the scenario that represents how the protocol works today, the attacker runs away with a profit of about 700 GRT, which is about 70 percent of the honest curator's budget; the honest curator started with a budget of a thousand. When we introduce the decaying capital gains tax, that mitigates profits being made over very short time periods.
G
We see that the realized profits of the attacker are less than one GRT; actually, about a hundredth of a GRT. Comparing that to the thousand GRT signaled by the honest curator, this is an incredibly small profit, probably not even worth the gas costs in most cases. So essentially, for this specific scenario, the attack is mitigated.
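The mitigation can be sketched like this (my own simplified reproduction, so the numbers won't match the prototype's exactly: no curation tax and naive bootstrapping): on a reserve-ratio-1/2 Bancor curve, an attacker who front-runs a 1,000 GRT signal and immediately exits captures a large gain, but a capital gains tax that starts at 100% and decays over a window takes essentially all of it when the holding time is near zero.

```javascript
// Sandwich attack on a reserve-ratio-1/2 curve, with and without a
// decaying capital gains tax. Simplified sketch of the scenario described.
function sandwich(taxWindow) {
  let reserve = 0, supply = 0;
  const mint = (d) => {
    const s = supply === 0 ? d : supply * ((1 + d / reserve) ** 0.5 - 1);
    reserve += d; supply += s; return s;
  };
  const burn = (s) => {
    const p = reserve * (1 - (1 - s / supply) ** 2); // exponent 1/r = 2
    reserve -= p; supply -= s; return p;
  };

  const atkShares = mint(10000);  // attacker front-runs with 10,000 GRT
  mint(1000);                     // honest curator signals 1,000 GRT
  const payout = burn(atkShares); // attacker exits immediately
  const gain = payout - 10000;

  // Capital gains tax decays linearly from 100% to 0 over `taxWindow`;
  // the attacker's holding time here is ~0, so the rate is ~100%.
  const holdingTime = 0;
  const taxRate = taxWindow > 0
    ? Math.max(0, 1 - holdingTime / taxWindow) : 0;
  return gain - Math.max(0, gain) * taxRate;
}

console.log(sandwich(0));   // large positive profit: the attack pays today
console.log(sandwich(100)); // ~0: the decaying tax confiscates the gain
```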
G
Obviously, what's not shown here is that you would expect attackers to modify their behavior in response to changes in the mechanism, so it might be more realistic to create some additional scenarios where maybe the attacker tries to time the optimal part of the capital gains tax decay, solving that optimization problem. In any case, you can expect that it's going to be much less profit than is available in the protocol today.
G
The other scenario that we show here is one we call ape-in. This is not so much about a sandwich attack, but just acknowledges that there's this overwhelming incentive to signal immediately after a subgraph is deployed, and that the amount of shares that curators get is incredibly sensitive to just the ordering in which people get in. Here we only show the experiment with the initialization phase.
G
I won't show you the default case today, but, actually, I'll show you the configuration real quick. Here we just show a bunch of curators, one of which I'm arbitrarily labeling the attacker; they're all signaling collectively about 100,000 GRT.
G
What's of note is that, irrespective of the ordering in which they all got in (here I'm showing the attacker getting in first, assuming they have a way of doing that), the share balances reflect effectively a uniform price. I think there are some JavaScript math rounding errors in the prototype, but effectively they're all getting a uniform price for the GRT that they signaled. So again, for this particular scenario, the pernicious behavior and effects that we've been seeing are mitigated.
G
The last thing I'll say, as a quick plug: if you're not familiar with Observable HQ, it's a really approachable way to explore these types of concepts, and notebooks are very easily forkable. So if you wanted to take the data in a cell and put it into a chart, or something else, to gain better intuition, please do so and share that with the community. It's also really easy to just use the simulation framework I have here to add your own scenarios. Oliver?
A
Yeah, you're good. We're going to move some stuff around, so you can close it out in the next five minutes.
G
Well, yeah, five minutes is definitely plenty; I might be done sooner. So, on next steps: one is still evaluating the gas costs. Ariel has done some work on that, and I believe he actually just posted an update today, so I encourage you to check that out. All of these things, both this prototype and the last one, introduce new questions around protocol parameters, so we're increasing the protocol parameter surface area.
G
For the initialization phase, we need to decide what the optimal phase duration is, as well as what the optimal exit duration is. Personally, for the initialization phase variation, I would advocate something that not only mitigates the curator issues that we've been seeing, but is also long enough to accommodate n-minus-one upgrade paths: an initialization phase that's long enough for subgraph developers to safely migrate from their previous version to the new version over the period that upgrade might take place, rather than having to worry about
G
not getting their signal migrated quickly enough and losing out on signaling on their own subgraph. On the decaying capital gains tax: for the prototypes that I showed, one of the parameters that would be introduced is the window over which the capital gains tax decays, but something else in the design space would be what the decay function looks like.
G
In the prototype that's linked in the forum posts, I showed a linear decay function, but you can imagine something that drops off faster, like something that drops off quadratically or according to some other function, so that might be worth considering as well. Although, if I had to advocate, my personal feeling is that having something that's good and fixes the issue is better than having something that's perfect, and

G
something that's simple, with respect to the Solidity implementation and reasoning about it, is I think also preferable to something that's quote-unquote perfect, if that were indeed knowable. The last one I want to put on the table here is just a sort of juxtaposition to the prior two proposals.
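The decay-shape design space mentioned above might be compared like this (illustration only; which shape best serves the mechanism is exactly the open question): a few candidate functions that all start at a 100% tax rate and reach 0 at the end of the window.

```javascript
// Candidate decay shapes for the capital gains tax over a window W,
// all starting at 100% and reaching 0 at t = W -- illustration only.
const W = 100;
const shapes = {
  linear:  (t) => 1 - t / W,
  convex:  (t) => (1 - t / W) ** 2,  // tax rate falls quickly early on
  concave: (t) => 1 - (t / W) ** 2,  // tax stays high, then falls quickly
};

for (const t of [0, 25, 50, 75, 100]) {
  const row = Object.entries(shapes)
    .map(([name, f]) => `${name}=${f(t).toFixed(2)}`).join('  ');
  console.log(`t=${t}: ${row}`);
}
```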
G
This is the proposal I put forward last week as an alternative path, in case the evaluation of either the gas costs or the upgrade path of the previous two mechanisms ends up being undesirable. One thing I'll note is that currently the Graph ecosystem is pretty constrained when it comes to smart contract development (not subgraph development): Ariel is the primary smart contract developer for the network, and is also doing a ton of work
G
on the protocol operations side of things. A big focus, for various other reasons, has been what the migration of the protocol to L2 looks like, which makes a number of improvements to the protocol across the board
G
for all actors involved. So this proposal was really motivated by exploring what it would look like to do the minimum amount of new smart contract development and upgrade work while still addressing these problems. It achieves that through parameter changes, as opposed to changes to the mechanisms, and you can see it all in the title: it flattens the curation-market bonding curve in the core protocol, and it steepens the Graph Name Service bonding curve.
G
I don't have time to go super deep into all this, but what many folks might not realize is that when you signal through the Graph Explorer and you auto-migrate your signal, you're actually using a nested bonding curve.
G
The reserves of the higher-order bonding curve, the GNS, are actually shares from the lower-level bonding curve in the core protocol, and the shape of the combined nested bonding curve is a function composition: the shape of the final curve is determined by the composition of those two functions. Currently the core protocol uses a reserve ratio of one half (which is actually linear, not curved), and the GNS uses a flat bonding curve,
G
a reserve ratio of one, and the combined effect is a reserve ratio of one half. We could reverse that: we could make the core protocol the flat bonding curve and make the GNS the steeper bonding curve.
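The composition argument can be sketched numerically (my own normalized shape argument, not the contracts themselves): with Bancor-style curves where supply scales as reserve to the power of the reserve ratio, the end-to-end ratio is the product of the two layers, so flipping which layer is flat leaves the combined shape unchanged.

```javascript
// Nested bonding curves: GNS shares are minted on a curve whose reserve
// is the *shares* of the core-protocol curve. With Bancor curves the
// effective reserve ratio is the product of the two layers -- so flipping
// which layer is flat (ratio 1) leaves the end-to-end shape unchanged.
const supplyFor = (reserve, ratio) => reserve ** ratio; // S ~ R^r, normalized

function nestedShares(grt, coreRatio, gnsRatio) {
  const coreShares = supplyFor(grt, coreRatio); // GRT -> core curve shares
  return supplyFor(coreShares, gnsRatio);       // core shares -> GNS shares
}

const today = nestedShares(10000, 0.5, 1);   // core at 1/2, GNS flat
const flipped = nestedShares(10000, 1, 0.5); // core flat, GNS at 1/2
console.log(today, flipped); // identical: both compose to a ratio of 1/2
```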
G
You preserve many of the same dynamics, where at least at the GNS level it's better to be in early than later, but you solve many of the problems: it's impossible to front-run at the core protocol level, it's impossible to sandwich attack at the core protocol level, and you trivially get n-minus-one support, because there's never a risk of a subgraph developer losing out on the ability to migrate signal to their own subgraph at a fair cost. It's not perfect.
G
I think in an ideal world folks would want some steeper curvature at the core protocol level as well, to have the dynamics that allow you to make a profit by making predictions at that level in the stack, but I think it gets us a lot of the way there, and I think it should seriously be considered, especially once we've had a chance to fully evaluate the upgrade path and gas costs of these two approaches. So I'll pause there.
G
I think we're right at time, but please take a look at these in the forum. Thank you to those, such as Derek and Orion, who already have. And yeah, that's it.
A
Thank you. Thank you, Brandon. We are at time. Thank you for the great discussions that we've had. We didn't get to Bubble Tea today; we are going to add them as our first agenda item at our community talk, which will be in two weeks' time. So we have two weeks, essentially, where we can gather more feedback, which is the one positive side of it. We're going to cover it in the community talk, and we're excited to see what you have to say. Thank you.