From YouTube: Intro to Scalar & Vector Node
Description
Workshop hosted by Jannis, CTO at Edge & Node, April 27th 2021.
A
Welcome everyone. Today we'll have Jannis and Zach go over the Scalar and Vector node, and with that I'll give it to Jannis.
B
All right, so today we're going to talk about Scalar and how it works on a more technical level: how it integrates with the indexer components and the indexer infrastructure.
B
What kind of changes have to be made to integrate it on the indexer side? This is currently for testnet only, where we're stabilizing Scalar — particularly the Vector integration: performance improvements, and a few other things we found that don't work at the scale that would be necessary for the network. But we're making good progress there; we're testing bursts.
B
The purpose of this is to introduce all of you indexers — and those who are interested but are not indexers — to Scalar in more detail, and also to prepare you for, ultimately, the main launch of Scalar. First of all, for some of the indexers who were part of the last indexer office hours, this will be mostly the same content.
B
I should be able to get through everything in about 30 minutes or so, and then we can talk about Scalar, Vector, etc.
B
I don't think we'll need 30 minutes to go through all the questions, but maybe there are a lot of questions, so we'll see. If you also want to read up on Scalar, a good resource is the blog on thegraph.com. There's a Scalar blog post that introduces its design and its benefits, and talks a bit about the collaboration with Connext, who built the Vector protocol that Scalar is built on. I can definitely recommend reading that. We'll go into a bit more detail today, but we'll start with a similar high-level, though much more compact, introduction to what it is.
B
In The Graph Network, there are two kinds of rewards. There are indexing rewards for performing the work of indexing subgraphs: whenever you close an allocation as an indexer, you can collect rewards for that, assuming you can prove that you did the indexing work — the so-called proof of indexing.
B
Most of you have probably heard of that. The other type of rewards are query fees, which indexers can earn by serving queries to a client. In order for these payments to work, we can't really use Ethereum layer one as is — we can't create a transaction for every single payment.
B
Obviously, there's a variety of different scaling solutions, but some have drawbacks, like not being able to collect the fees within a certain time, or before a certain time period has elapsed, etc. And so we started on this last year.
So
scala
has
built
a
lot
of
knowledge
that
we've
acquired
over
the
last
year,
a
lot
of
experience
and
that
we've
that
we've
gained
from
working
with
state
channels
and
so
like
one
takeaway
that
we
that
we
yeah
that's
that
that
we
we
realized
is
necessary
that
the
majority
of
payments
that
you
or
like
query
fees
that
you
send
between
the
client
and
indexa,
can't
really
go
through
state
channels,
because
state
channels
need
to
be
updated.
B
They need to be synchronized between participants, and that would usually require too much work to be feasible to do within a query — especially a short-lived query that doesn't require a lot of computational work. One thing we learned is that the overhead needs to be really, really low, because even a few milliseconds on each query add up pretty quickly, and so that's not really an option.
B
Another thing: messages might not be delivered between the client and the indexer — state channel updates in particular — and so these two sides can run out of sync, and that's tricky to handle, because you then always have to re-sync. We saw that in the testnet, when we still had a solution based on the State Channels protocol, and that required hundreds, thousands —
B
— tens of thousands of messages to be passed around all the time to recover channels that had run out of sync. So that's another thing we realized: we don't want to use these state channels for every single payment, every single query fee transaction. Instead, we want to interact with the state channels on a less frequent basis.
B
So Scalar is essentially a framework for microtransactions for query fees that is based on the state channel framework, but it uses the state channel framework more like a transport layer — well, I think of it as a transport layer, but it's not really a transport layer. It's more like a settlement layer, where two parties come to an agreement about what the overall query fees were, and they resolve —
B
— they resolve the accumulated query fees, and then the party that collected the query fees can go and take them on chain. The way Scalar is designed is that you have queries, starting at the top here — let me see if I can zoom in. So you have queries; every query is associated with a query fee transaction, and those are collected within receipts. You can think of receipts as lanes of parallelism.
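To make the "lanes of parallelism" idea concrete, here is a minimal sketch (hypothetical types and names, not the actual gateway implementation) of a client-side receipt pool: each in-flight query borrows a free receipt lane, bumps that lane's running fee total, and frees the lane afterwards.

```typescript
// Hypothetical sketch of a client-side receipt pool. Each receipt is a
// "lane": it can only be attached to one in-flight query at a time, so
// truly parallel queries each borrow their own receipt.
interface Receipt {
  id: number;
  totalFees: bigint; // running total of query fees collected in this lane
  inUse: boolean;
}

class ReceiptPool {
  private receipts: Receipt[] = [];
  private nextId = 0;

  // Borrow a free receipt, creating a new lane only if all are busy.
  borrow(): Receipt {
    let receipt = this.receipts.find((r) => !r.inUse);
    if (!receipt) {
      receipt = { id: this.nextId++, totalFees: 0n, inUse: false };
      this.receipts.push(receipt);
    }
    receipt.inUse = true;
    return receipt;
  }

  // Add this query's fee to the lane's running total and free the lane.
  release(receipt: Receipt, queryFee: bigint): void {
    receipt.totalFees += queryFee;
    receipt.inUse = false;
  }

  laneCount(): number {
    return this.receipts.length;
  }
}
```

With 20 queries genuinely in flight at once, `borrow()` runs 20 times before any `release()`, so the pool grows to 20 lanes — matching the example described next.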
B
If you have 20 parallel requests between a client and an indexer, you might need 20 receipts, because you can only use one receipt at a time. You would then create different receipts in parallel and send those over. Along with every query, there's an updated receipt sent to the indexer, with the latest query fees added on top of what was already collected within the receipt. So the indexer basically just has to check: is this receipt unique?
B
Does this relate to something I already know? (We'll talk a little bit more about what context it needs in a bit.) The receipt is generated and sent by a gateway or client, and all the indexer needs to know is: okay, this is a valid receipt, and it's higher than the amount I received last time — the query fee tally doesn't suddenly go down. It just needs to know: okay, there's more coming in, I'll —
B
— take that, cache it, and put it on the side. So the receipts are just bundling up query fees in a very efficient way, basically in memory, but cached in the database for later. A nice thing about these receipts is also that they're pretty fault tolerant: especially when a client or gateway crashes, it can just come up again, create new receipts, and send those over, and the indexer will see: okay, I don't know this yet, I'll —
B
— add it to my list. It's definitely better than receiving nothing, and over time that receipt will likely collect more query fees, because the client is back up again and will send more queries. So it's no trouble for the indexer to also accept that receipt. These receipts eventually need to be rolled up into the lower-level state channels, and the way that happens is through an intermediate construction called the transfer.
B
It's a pretty common construction in state channels that you have a state channel inside the state channel. Different projects use different terminology: you create an "app" that's installed inside the state channel and represents a game of state transitions or something. In this case, these apps are called transfers, and they are really just created once and then resolved once.
B
They are created by a client or gateway to prepare a kind of body to put all these receipts in — to associate all these receipts with — and the indexer will ultimately resolve the transfer. The way it does that is that it takes all the receipts that were created in association with the transfer. So that's the context it needs to know.
B
Ultimately, aside from that, a transfer consists of two balances: there's a balance for one side of the channel and a balance for the other side of the channel. When a transfer is resolved, the amount that the transfer is resolved for — basically the sum of all the query fees in the receipts — gets moved from one side to the other side in the channel, and that's what happens.
B
Afterwards, one side has more, one side has less, and that's really it. Then there's a bit more that the indexer would do: the indexer would ultimately take this balance — all these query fees it has gained, that have been moved into its state channel balance — on chain, into a rebate pool that is associated with the subgraph it received these query fees for.
B
Those fees are then distributed, and ultimately, after a dispute period, the indexer can claim the resulting fees that it has earned itself and can withdraw those from the staking contract.
B
One thing to note here is that there isn't just a single channel between every client and every indexer — there would actually be more channels that way. Instead, there is a router sitting between the consumer (the client or gateway) and the indexer. Long term, the plan is for this router to become a network of routers, with incentivization for people to run and operate them.
B
Routers can, for instance, take a fee as part of their operations, and whoever interacts with a router can get a quote for the fees the router takes and can decide which routers to use, etc. There's a lot there that we're not going to cover today — and that I'm also not the expert on — but what this construction results in is that you have a single channel for every client with the router, and a channel between the router and each indexer.
B
So if you have n indexers and one client, you have n + 1 channels. There are a few more consequences to that, which we'll talk about in a moment. These transfers exist in both of these channels, and we'll talk about how they are kept in sync and all of that in a little bit. I'm not able to fully follow the chat — I see that Zach is fielding some questions; anything we want to talk about, bring up later again and we can.
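The point of the router topology can be put in numbers (a simple back-of-the-envelope sketch, not from the talk's slides): without a router, every client-indexer pair needs its own channel; with one router in the middle, each party only needs a single channel with the router.

```typescript
// Channels needed for `clients` clients and `indexers` indexers:
// - direct: one channel per client-indexer pair
// - routed: each client and each indexer has one channel with the router
function directChannels(clients: number, indexers: number): number {
  return clients * indexers;
}

function routedChannels(clients: number, indexers: number): number {
  return clients + indexers;
}
```

For one client and n indexers this gives n + 1 channels, as in the talk; for 10 clients and 100 indexers it is 110 channels instead of 1000.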
B
We've talked about the nomenclature a little bit: queries, obviously, receipts, and we've talked about the transfers and the state channel. One thing to also note about the transfers, perhaps, is how they are created: the receipts are created against a transfer, and the transfers are created against an allocation made by the indexer. So there's a direct link between the allocations created by the indexer, the transfers, and the receipts.
B
That ties everything together and associates the fees with the indexer allocating some stake towards the subgraph that's being queried. Cool. We've talked about the participants as well, a little bit: the client sends queries to indexers, and each query comes with an updated receipt that includes the latest query fees.
B
The indexer receives those, does some sanity checking, verifies that the receipt is valid, and caches it on the side. The way it's currently implemented in the indexer agent is that transfers are resolved at about the same time that an allocation is closed. So whenever you close the allocation, there's a little bit of a gap, and after about 10 minutes — I think that's currently the default —
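The receipt check described earlier — accept an update only if the running query-fee total never goes down — can be sketched in a few lines (hypothetical names; the real indexer service does more, e.g. signature verification):

```typescript
// Hypothetical sketch of the indexer-side receipt check: a receipt update
// is only accepted if its running query-fee total is strictly greater than
// what was last seen for that receipt id, i.e. the tally never goes down.
const latestTotals = new Map<number, bigint>();

function acceptReceipt(receiptId: number, newTotal: bigint): boolean {
  const previous = latestTotals.get(receiptId) ?? 0n;
  if (newTotal <= previous) {
    // Stale or replayed receipt: reject, keep the higher cached value.
    return false;
  }
  latestTotals.set(receiptId, newTotal); // cache for later settlement
  return true;
}
```

Note that this check is purely local and in memory, which is what keeps it out of the expensive state channel machinery.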
B
— the transfers corresponding to the allocation are then resolved. That doesn't take long, and once they are resolved — then, I think, after an additional delay, or maybe it's immediate — the fees collected through all the transfers associated with the allocation get withdrawn into the staking contract. That's similar to how it was done until now, where, after an allocation was closed, the indexer could then call collect to collect the query fees accumulated in the state channels. Cool.
B
So we've talked about that. And then we have the router, which, aside from being this bridge and reducing the number of channels you create, essentially also makes the collateralization of these state channels and transfers more efficient — we'll talk about that a little bit as well.
B
That has a consequence for these transfers that we've looked at, and we'll look at this image here real quick — okay, that was not what I wanted to do.
B
If you imagine a client or gateway having a channel with the router, but not having a state channel with the indexer, then when the gateway creates a transfer to associate receipts with, it will do that in its channel with the router, and the router is responsible for copying that transfer — creating a copy of it — in the state channel with the corresponding indexer. So the transfer is created with the indexer as the counterparty, but it's not really an end-to-end single transfer.
B
And if you imagine a multi-hop setup where there are multiple routers in between, that'll be similar. There's an event in Vector that tells you whether a transfer was fully set up end to end between the client or —
B
— whoever is the creator and the other side. Let's just talk briefly about the distinction of balances, just to recap, because I know I'm talking pretty fast. Let's assume we have an initial state channel construction where we have a client-router channel in which the client has, say, 10 GRT and the router has zero, and a router-indexer channel in which the router has 20 GRT and the indexer has zero, and there is a single transfer between this client and the indexer for 6 GRT, with two receipts.
B
(The number of receipts maybe doesn't really matter at this level.) Now this transfer is resolved. That means the gateway will have to send 6 GRT to the router, and the router will have to update the balances in its channel with the indexer so that the router there also has 6 less than before: 20 goes down to 14, and the indexer has 6. So in total the router still has 20, the gateway has 6 less —
B
— and the indexer has 6 more. Pretty basic, but the router — or the network of routers in the future — will make sure that that's the case. Right, so let's talk a little bit about collateralization and the overall flow of query fees. There's a zoom button somewhere in these images.
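The 6 GRT example can be checked with a few lines of bookkeeping (a sketch of the arithmetic only; Vector's actual balance updates are signed channel states):

```typescript
// Balances, in GRT, across the two channels from the example.
// Resolving a transfer of `amount` moves it client -> router in the
// client-router channel and router -> indexer in the router-indexer channel.
interface Channels {
  client: bigint;    // client's balance in the client-router channel
  routerIn: bigint;  // router's balance in the client-router channel
  routerOut: bigint; // router's balance in the router-indexer channel
  indexer: bigint;   // indexer's balance in the router-indexer channel
}

function resolveTransfer(ch: Channels, amount: bigint): Channels {
  return {
    client: ch.client - amount,
    routerIn: ch.routerIn + amount,
    routerOut: ch.routerOut - amount,
    indexer: ch.indexer + amount,
  };
}
```

Starting from client 10, router 0 / router 20, indexer 0 and resolving 6 GRT leaves the client with 4, the router's indexer-side balance at 14, the indexer with 6, and the router's total unchanged at 20.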
B
Maybe, maybe not — okay, let's do this. So this is basically the full picture of how everything fits together. We have these state channels in green.
B
We have the router sitting in between, kind of managing these two — managing this one in particular, and this one as well — and these channels need to be collateralized so that, for instance, if you create a channel and you create a transfer in it, you can ultimately update the balances when the transfer is resolved. There needs to be a balance, say, on the client side, for the router and client balances to be updated — for the client to have —
B
— less afterwards and the router to have more; same with the indexer. The router will transfer query fees to the indexer's balance, and for that it needs to have some balance of its own locked up in the channel so that that can work. What that means is that these channels need to be kept alive and sufficiently collateralized.
B
The client needs to provide collateral and can deposit into the state channel, and in a similar way, the router also has to provide collateral into its state channel. These red lines are the on-chain transactions — or can be on-chain transactions — where collateral is provided into the state channels.
B
Another transaction that happens on chain is the indexer ultimately withdrawing query fees: based on the balance it has and the amounts of query fees it has collected for different allocations, it will withdraw the corresponding amounts. That's something the agent does automatically. So those are the on-chain transactions, and the rest is all off chain.
B
Then we have these dark gray lines — that's the transfers and all the syncing that goes on there between these three parties. So that's kind of the base layer for Scalar. And then we have the fast path, in light gray, where the client sends queries and these updated receipts to the indexer, and the indexer just caches them.
B
If they make sense, that path is extremely fast and requires no database interactions, either on the client or on the indexer. Sure, the indexer will store the receipts, but it can do that out of band, and even if you have multiple indexer services, they can do that in parallel. They can periodically flush, with rules that make sure they don't override receipts that already have a higher amount of query fees in them.
B
Because if you have a load balancer sitting here with, let's say, 10 indexer services, the same receipt gets used multiple times, and since the load balancer doesn't know anything about the receipts, the same receipt with different values will go to different instances over time. So you want to make sure that you don't accidentally override a higher value of query fees. There are a few things the indexer needs to take care of, but none of this is in the critical path of queries, and so this path is really, really efficient and really crash tolerant: if the client goes away, it can create new receipts and send those over, and those receipts are really small as well.
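The "never overwrite a higher value" rule for flushing receipts from multiple indexer-service instances can be sketched like this (a hypothetical in-memory store standing in for the database; a SQL implementation would use a conditional UPDATE):

```typescript
// Hypothetical receipt store shared by several indexer-service instances
// behind a load balancer. Flushes may arrive out of order, so a write only
// wins if its running total is higher than what is already stored.
const storedTotals = new Map<number, bigint>();

function flushReceipt(receiptId: number, total: bigint): void {
  const existing = storedTotals.get(receiptId);
  // Keep the maximum: an older, smaller snapshot must not clobber a newer one.
  if (existing === undefined || total > existing) {
    storedTotals.set(receiptId, total);
  }
}
```

Because the stored value only ever increases, any instance can flush at any time without coordinating with the others, which is what keeps this out of the query-serving critical path.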
B
In total, it's a pretty small payload to send along as well. So that's the construction of Scalar in about 20 minutes. How does this affect the indexer infrastructure?
B
This is particularly for all you indexers, and we've covered this in the indexer office hours already, but I will do it again here — let's see if this loads. Okay.
B
Cool. So there's only one new component on the indexer side, but there is a new component, which is a vector node that you have to deploy alongside the agent, the service, and the graph nodes, and it takes care of all the interactions with the Vector protocol.
B
That means managing your channel, managing your transfers, allowing you to look up the transfers that are created, and so on — and I think it also stores a history of the transfers (I'm not entirely sure, but probably). So that is the new component. It comes in the form of a Docker image — you can probably run it bare metal too, but I personally haven't tried that — and it essentially has one port, 8000, that both the indexer agent and service will use for different purposes.
B
So this component is new, and this port is used for a variety of things. The agent will subscribe to events for incoming transfers, for instance.
B
So whenever a client creates a transfer with this indexer as the counterparty, the indexer agent will receive a notification or event and can then — I forgot what exactly it does there. For resolutions, for instance, when transfers are resolved, it will keep a list of all the transfers that are associated with an allocation and that have been resolved, and keep track of the states there in a simple way, so it knows when all the transfers for an allocation are done resolving and when it's ready to withdraw any fees into the staking contract. The agent also initiates the resolution of the transfers.
B
It also withdraws the query fees from the state channel balance to the rebate pool after resolving all the transfers. And then both the agent and the service use this vector node to look up transfers for incoming query fees. You can do that as well: there's an API for talking to this vector node where you can look up individual transfers, etc.
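One way to use that lookup API when debugging (a sketch with hypothetical field shapes; the real Vector API returns richer objects) is to group transfer copies by their shared routing ID: each copy has its own unique transfer ID, while the two copies of a routed transfer — one in the client-router channel, one in the router-indexer channel — share a routing ID.

```typescript
// Hypothetical, simplified shape of a transfer record returned by the
// vector node's lookup API. Copies that belong together share a routingId
// but have distinct transferIds.
interface TransferCopy {
  transferId: string;
  routingId: string;
  channel: string; // which state channel this copy lives in
}

function groupByRoutingId(transfers: TransferCopy[]): Map<string, TransferCopy[]> {
  const groups = new Map<string, TransferCopy[]>();
  for (const t of transfers) {
    const group = groups.get(t.routingId) ?? [];
    group.push(t);
    groups.set(t.routingId, group);
  }
  return groups;
}
```

Seeing both halves of a routed transfer side by side makes it easy to spot transfers that were created in one channel but never copied into the other.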
B
So if you think about this two-channel construction — client-router and its counterpart — you have these two transfer copies, and the routing ID identifies the two copies that belong together in the different state channels. They each have a unique transfer ID that's different, and they have a routing ID that they have in common, so for debugging purposes the API is really nice. A cool thing, also —
B
— you can run this vector node in the browser, so there are all kinds of future possibilities there. Cool. There's one intricacy that I commented on in the indexer office hours too, because it isn't super obvious. The way the indexer agent subscribes to the vector node is that it posts a URL of itself — with a special port, 8001 by default — to the vector node, because the subscriptions are not done with WebSockets.
B
They are done over HTTP, so the indexer agent runs its own kind of mini server — internal, or at least it should be internal to your infrastructure — that the vector node can talk to. The vector node will then just make POST requests with those state channel events (transfer events, for instance) to the indexer agent through that URL. So you have to have this internal service. The vector node itself does not have to be exposed directly.
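Because the subscription works by the vector node POSTing events back to a URL the agent registered, the agent-side handler essentially boils down to routing parsed JSON bodies by event type. A minimal sketch with invented event names (the actual Vector event types and payloads differ):

```typescript
// Hypothetical dispatcher for events the vector node POSTs to the agent's
// internal callback server (port 8001 by default). The event names used
// here are illustrative, not Vector's actual event types.
type EventHandler = (payload: unknown) => void;

class EventDispatcher {
  private handlers = new Map<string, EventHandler[]>();

  on(eventType: string, handler: EventHandler): void {
    const list = this.handlers.get(eventType) ?? [];
    list.push(handler);
    this.handlers.set(eventType, list);
  }

  // Called with the parsed JSON body of each incoming POST request.
  // Returns how many handlers saw the event.
  dispatch(body: { type: string; payload: unknown }): number {
    const list = this.handlers.get(body.type) ?? [];
    for (const handler of list) handler(body.payload);
    return list.length;
  }
}
```

In a real deployment this dispatcher would sit behind a small HTTP server bound only to the internal network, matching the "should be internal to your infrastructure" point above.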
B
It will connect to a set of NATS services that are currently run by Connext, and those handle all the communication between all the different nodes in the network. There is one other thing, right: the channel messages inbox endpoint that the indexer CLI checks when you run `graph indexer status` — that's gone. The indexer CLI still needs to be updated to get rid of that test; it could do other tests instead.
B
One thing we're adding to the indexer service is a way to look up the version of Vector being used. If that's correct, it will look it up from the vector node; if you get that information, you know the vector node is up and running and can give you its version. So that will replace the check for the old endpoint.
B
Really, all the transfer and receipt management is tied to the allocations and handled automatically by the agent and service.
B
So apart from updating the infrastructure like this, there's nothing to do, really. This is available on testnet right now, and I can only hope that as many of you as possible update to it, so that there's a lot of testing that can be done and we can test at the scale that mainnet requires.
B
Two things before we go into questions. One is that we have this mainnet/testnet configuration docs page in the indexer repo, and there you can find the latest releases to use — for the vector node and, obviously, graph-node, indexer agent, etc. — and there is a section for the testnet as well, of course. Where is it? There —
B
— we go. That will also have some details on what new things you need to pass to the agent and the service to be able to talk to the vector node. There is a single router right now, and all of the participants in the Vector protocol have an identifier that's constructed from the public key of the participant. In your case, I think that would be the indexer address — it could be —
B
— the operator address, I'm not entirely sure right now, but it works. So this is the router that you currently have to configure, so that the agent will use that router as the counterparty to set up the indexer-router state channel. The service needs to know those two things as well: the vector node URL, internal to your cluster, and the router ID. And then Vector itself has a few configuration options.
B
I'm not entirely sure if this one is really needed, because the mnemonic is also included here. I think, for now, it's required that you use the same mnemonic as you also use for the agent — the operator mnemonic. That would also mean that the public identifier here is based on the public key of the indexer agent mnemonic, i.e. the operator's —
B
— if you have a separate one from the indexer key. It needs to talk to Ethereum, and it needs to talk to the NATS services — there are a few, just to make sure there's enough redundancy. I think we can also run our own; I'm not sure if they are all required to be the same. I think there are some extensions to the NATS software that have been made for Connext — or Vector, in this case — but it might be possible to run your own.
B
I believe they've also added being able to pass a file path in for this, and then you can mount this data as, for instance, a Docker secret or Kubernetes secret or something like that. So that's what's changed there. And then we've also just added an overview of Scalar — pretty much the same as what we've just gone through — to the indexer repo as well, so that can be found there in scalar.md, and it points to the blog post.
B
If you want to read that — it also covers the things we've just talked about: the query flow, the query fee flow, the state channels, how the transfers work (just some basics), and the infrastructure as well — the ports, and what changes are involved in adding Scalar on the indexer side.
B
One thing that's worth noting is that the vector node also stores these transfers etc. in a Postgres database, and I would strongly recommend keeping that separate from the indexer database, because you never know — especially during testnet right now, you may need to wipe that database or whatever. So you definitely want to keep that separate. Right, and that, I think, is the end of my presentation; the rest would be questions. I think we can start with the questions that we went through in the indexer office hours too.
B
So one thing is: does the vector node need to be scaled? Similar to the indexer service — where, if you want to be able to serve more queries, you'll spin up more indexer services — the vector node itself doesn't see a lot of traffic.
B
That's part of the reason why Scalar is designed the way it is, with these layers: the receipts are the only critical path and are interacted with very frequently, whereas the vector node is touched rarely — creating transfers a few times per allocation, maybe, and resolving transfers about the same.
B
It may even break things — I'm not sure. Two vector nodes running against the same database, for the same mnemonic or Ethereum account, is probably not going to fly, so don't do that; it just makes your life easier. We're testing on testnet right now, so it's live there. There's currently a bit of a performance issue that we're working out with creating transfers, and that should be resolved —
B
— sometime today, hopefully. I don't want to be too optimistic, but it's looking good, so we'll keep stabilizing there, and hopefully that shouldn't take long. The timeline for Scalar on mainnet is that we intend to make it possible to receive query fees using Scalar on mainnet on roughly the same timeline that we're looking at for the migration of subgraphs to mainnet — so very, very soon™. Okay!
B
So that's it from me for now. Over the next three days, I think we'll want to test a bit more on testnet, so I don't think we'll be quite ready to run this on mainnet in three days, but shortly after — maybe four, maybe five, we'll see. What other questions are there?
C
What about the indexer operational question, Jannis? I did put it in the chat there. It was a pretty banal question, to be honest — it's not that interesting — but we've got metrics endpoints for pretty much every piece of the stack. When I first ran the repo implementation of Vector in dev mode —
C
— it sort of sets up a whole bunch of additional containers around Vector to let you monitor things and various other bits and bobs. Do you know if Vector itself has a metrics endpoint that we can expose? Or maybe it's a bit more convoluted than that. I don't have the skills to go into what the repo does and figure out exactly what it's doing — is it using Prometheus, or is it doing log parsing or something like that?
B
I could be wrong — I don't think there is a complete metrics endpoint yet, although I'm hoping that will come. What we can do, but haven't done yet, is add metrics — for instance, transfers created per allocation, etc. — in the agent, where you get notified of all that, and so we could provide some insights there.
B
The logs emitted by the vector node are, just like the indexer agent and service, logged using pino, so that's JSON, which Google Cloud or Elasticsearch, for instance, will parse, and you'll be able to filter by the different fields pretty easily. It's a good question — I'll check with Connext there. Chris writes that it looks like it exposes a Prometheus endpoint.
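Since the vector node logs newline-delimited pino JSON, filtering it doesn't need special tooling. A few lines suffice to pull out entries at or above a given severity (pino uses numeric levels: 10 trace, 20 debug, 30 info, 40 warn, 50 error, 60 fatal):

```typescript
// Filter newline-delimited pino JSON logs by minimum numeric level.
// pino levels: 10 trace, 20 debug, 30 info, 40 warn, 50 error, 60 fatal.
function filterLogs(raw: string, minLevel: number): object[] {
  return raw
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as { level?: number })
    .filter((entry) => (entry.level ?? 0) >= minLevel);
}
```

The same filtering is what log backends like Elasticsearch or Google Cloud Logging do for you once the JSON is ingested, as mentioned above.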
C
Yeah — I've just taken a quick look: it's scraping port 8000 on the vector node, which seems strange to me, unless port 8000 is doing something different in their container. But thanks, Chris, that's something to start looking at. We can figure out from there what it's doing and maybe work it out ourselves.
B
It's not in the critical path. If transfers, for instance, fail to resolve, I think we're retrying — or certainly can retry — in the agent, and it's not going to eat a lot of resources, so it also shouldn't crash a lot, for instance due to running out of memory or anything. So I think that situation would probably be improved.
B
I think the router — my understanding is that the router is basically a thin wrapper around a regular node or something. So I think port 8000 might serve a similar purpose in the router as it does in the node, and so it may not be the right port to scrape there.
B
Yes — and as for the Connext Discord: there is also a link I can forward to those who are interested in potentially becoming routers. They've sent me a link where you can basically sign up if you're interested in knowing more.
B
I think that would all be communication that happens through NATS. So NATS is this high-performance, low-overhead alternative to HTTP messaging — well, it's probably not a one-to-one alternative.
B
It does the subscriptions, as far as I know, and has things like retry ability, etc. built in. All the communication between the different nodes and the router — or routers — happens through NATS, and that's why, for instance (I'm not sure exactly how it works), the vector node will connect to NATS and doesn't need to expose a port publicly for that. Same with the routers: you don't need a direct line of communication.
E
Okay,
does
that
mean
that,
like
service
discovery
is
effectively
facilitated
by
nats,
yes,
cool
thanks.
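As a rough illustration of the pattern being described (not Connext's actual code), NATS-style messaging routes messages by hierarchical, dot-separated subjects, which is what lets nodes and routers find each other through the broker without dialing one another directly. A minimal in-memory sketch, with illustrative names:

```python
class MiniBus:
    """Tiny in-memory stand-in for NATS-style subject routing.

    Subjects are dot-separated tokens; '*' matches exactly one token.
    Real NATS adds network transport, queues, retries, and more.
    """
    def __init__(self):
        self.subs = []  # list of (pattern, callback) pairs

    def subscribe(self, pattern, callback):
        self.subs.append((pattern, callback))

    @staticmethod
    def _matches(pattern, subject):
        p, s = pattern.split("."), subject.split(".")
        # same length, and each pattern token is '*' or an exact match
        return len(p) == len(s) and all(a in ("*", b) for a, b in zip(p, s))

    def publish(self, subject, msg):
        for pattern, cb in self.subs:
            if self._matches(pattern, subject):
                cb(msg)
```

A router could then subscribe to something like `transfer.*` and receive messages from any node, with neither side exposing a port to the other.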
B
And yeah, NATS was kind of more critical in our first attempt with Connext, about a year and a month or so ago. We worked with Connext on their previous solution, which wasn't called Vector but Indra, and our initial approach there was to basically create a transfer for every single query fee transaction, I think, and there it was a lot more critical that NATS was, you know.
B
Well, NATS itself is efficient, but also it mattered that the equivalent of the vector node was efficient and that all the transfer creations and any updates were super efficient. And that's no longer the case: thanks to the new design of Scalar, we don't need to interact with the underlying state channel very often.
B
How would a migration happen? If it happens, would nodes both with and without Vector work simultaneously and be maintained? Yeah, so until this lands on mainnet, what's out there will still work. And there are these two parts of the migration: there's the indexing, which doesn't change at all.
B
Whether Scalar is involved in the queries or not, so that can start right away. And then for query fees, the thinking there is that projects probably won't start testing their subgraphs on mainnet without Scalar being deployed. So I think that migration will be: test Scalar, make sure it works, then deploy to mainnet and have it ready for the traffic.
B
Any comments regarding recent unofficial subgraph deployments? Not really, from my perspective. It's a decentralized network, so subgraphs might be unofficial if they are not created by the original project, but that's not something that's locked down in any way.
B
"When indexers feel comfortable": does this mean this migration to Scalar will be optional, like we'd be running in a hybrid situation for some time? I wouldn't say so. It's basically up to the client to decide which indexers are considered compatible, you know, which indexers it'll query.
B
Basically, once Scalar is ready and the first indexers have migrated to Scalar on mainnet, that's when it'll become a requirement. So there will be no two solutions running in parallel; I think that would be too complicated.
B
All right, yeah, one note on the "when indexers feel comfortable" with this new setup: that's why we have the testnet. So I encourage you to upgrade there if you haven't, and make sure that things are working. We'll test, and we'll work with everyone to make sure that the setup works.
B
It doesn't seem like there are any more questions about Scalar, Vector, the Vector integration, et cetera, so maybe we can take the last eight minutes back and wrap up early.
C
Maybe I can ask one more question: do you have anything scheduled right now in terms of load testing on testnet?
B
To roll out that upgrade? Well, I've just rolled out an upgrade and there's one issue with it. The goal is to have that working again within the next few hours, and there may be an update necessary on the indexer side, so a new commit or a new Vector pre-release to upgrade to, and then we'll do stress testing once that's up again. Great, thanks. That was all. "Internal URL": are those HTTP-style internal URLs?
D
In the indexer agent and indexer service, you have to specify the vector node and vector event service, and it says "internal URL" in the infrastructure docs.
B
Yes, yes. So like I said, the vector node doesn't have to be, and shouldn't be, exposed to the public, because it doesn't need to be, just like the graph-node API endpoints, for instance. The only thing you should expose is the indexer service. So this is an internal URL, internal to the indexer's environment: if you run Kubernetes, that would be a service that's not exposed; if you're on bare metal, not a public port.
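For Kubernetes, a cluster-internal Service along these lines would give the agent an internal URL for the vector node without exposing it publicly. The names and port here are assumptions; adjust them to your deployment.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vector-node
spec:
  type: ClusterIP          # reachable only from inside the cluster
  selector:
    app: vector-node       # assumed pod label for the vector node
  ports:
    - port: 8000
      targetPort: 8000
```

The agent and service would then use something like `http://vector-node:8000` as the internal URL.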
B
Yeah, that's true. I can add that: these are actually HTTP URLs, because what the vector node provides is not a JSON-RPC endpoint but REST-style endpoints.
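To illustrate the difference (with hypothetical paths, not the vector node's documented API): a JSON-RPC client posts a method-and-params envelope to one endpoint, while a REST-style client encodes the operation in the URL path and the HTTP verb.

```python
import json

def jsonrpc_request(method, params, request_id=1):
    """Build a JSON-RPC 2.0 request body: one endpoint, method in the payload."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": method, "params": params})

def rest_request(base_url, resource, resource_id=None):
    """Build a REST-style URL: the operation lives in the path, not the body."""
    url = f"{base_url}/{resource}"
    if resource_id is not None:
        url += f"/{resource_id}"
    return url
```

So a REST-style vector node call would look like `GET http://vector-node:8000/transfers/0xabc` (hypothetical path) rather than posting a `{"method": ...}` envelope.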
B
Sounds good to me. Let's see if there are any last-minute questions. If not, and we're wrapping up early, then please head over to Discord and we'll be happy to answer anything.