From YouTube: Data Platform Community Meeting (April '22)
Description
This week we discussed the history of indexers on NEAR and the newly released NEAR Lake project.
Join us next time:
👉 http://near.ai/data-platform-meetings 👈
NEAR Lake Repo:
https://github.com/near/near-lake
NEAR Lake Framework Repo:
https://github.com/near/near-lake-framework
Stack Overflow Discussion:
https://stackoverflow.com/questions/tagged/nearprotocol+indexer
NEAR Indexer for Explorer:
https://github.com/near/near-indexer-for-explorer#shared-public-access
A
Cool, well, yeah — let's get started. Hey, I'm Josh, DevRel at NEAR Protocol. We're here for our last community meeting of the month, but not the least: we're really excited about NEAR indexers and the data platform team, who are going to do a presentation for us and answer any questions anybody has on all things data when it comes to NEAR Protocol. So go ahead and do an introduction — I'll start out with Tiffany.
B
Yep, hey guys, I'm Tiffany. I'm the product manager for the data platform team at NEAR Protocol, and I'm very excited to be here.
C
Cool, yeah, hey everybody. Today I'm with you in the role of data platform team leader — you may have seen me a few weeks ago with the Explorer team as well. As you can see, I'm jumping around trying to be everywhere, and Josh is definitely doing even more of that. I'm so happy to see everybody here and I hope you enjoy this call. I'll pass it to Bohdan.
D
Hey everyone, yeah, I'm Bohdan. I'm working on the data platform team, mostly working on indexers, docs, and everything related. I'm passing it to Olga — we'll introduce ourselves more during the presentations.
A
Awesome — go, go team! If you haven't attended one of these sessions before: at the very bottom of your screen you can see there's a Q&A section, and there's also the chat, so feel free to post anything in chat.
A
If you have a specific question, fire it off at any time during the presentation. We wanted this to be more of an interactive session with community feedback — that's ultimately why we're having these: to get feedback from our community to help us shape how we build things.
A
What we prioritize, what's most important to you — and to help clarify any questions you have about any of the tools or features on the platform. So at any time, go ahead and fill out a question at the bottom and we'll answer it as we go. With that, I'll pass it off to Tiffany to start the presentation.
B
Sounds good, yeah. We'll kick off with the team objectives for the data platform team — we have our top three objectives here. The first is that we want to make NEAR the most accessible blockchain for web3 developers. We're very aligned with the tools team on this front, because our products interact with developers — they use them on—
B
—a very frequent basis, so we definitely want to make that experience as easy as possible for anyone to adopt and use. The second one is to enable product-based decision making for data platform products, and this really speaks to—
B
We really want to get as much feedback and engagement with our community as possible, to build the products that our community loves and needs. And the last one is to power NEAR projects with stable, accurate, and extensible data infrastructure — we really want to maintain the quality of our products and deliver the most accurate data, the kind that will be used for all kinds of cases and projects, no matter where you are or what you do.
D
Yeah, that's right. Meanwhile, a little introduction: I'm here to tell a cool story about the entire history of the indexer — what it is and how it was actually invented, I would say. Yeah... but we can't see the screen, unfortunately.
A
Yep — oh yeah, the button to the right, "slideshow". Oh yeah.
D
The history of the indexer begins in the year 2020 — I call this year the NEAR stone age, because at that moment NEAR had only the JSON-RPC and nothing else. At that point there was the Explorer project — I hope everyone has seen it and is using it.
D
The Explorer was the only consumer of NEAR data at that moment — it showed everything to people. Back then Frol, the inventor of Explorer, created the so-called pull model: Explorer asked the JSON-RPC every second for a new block and tried to save it into an internal database, so it could later be shown on the Explorer pages. It was a working solution, but not the best.
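The pull model described above can be sketched in a few lines. This is an illustrative toy, not Explorer's actual code: `fetch_latest_height` and `fetch_block` stand in for real JSON-RPC calls, and a `BTreeMap` stands in for the internal database.

```rust
use std::collections::BTreeMap;

/// Toy "pull model" indexer: poll for the chain head and backfill
/// every block we have not stored yet.
struct PullIndexer {
    last_saved: Option<u64>,
    db: BTreeMap<u64, String>, // stand-in for Explorer's internal database
}

impl PullIndexer {
    fn new() -> Self {
        Self { last_saved: None, db: BTreeMap::new() }
    }

    /// One polling tick, as Explorer did every second: ask the "RPC"
    /// for the latest height and save every block we are missing.
    fn tick(&mut self, fetch_latest_height: impl Fn() -> u64, fetch_block: impl Fn(u64) -> String) {
        let head = fetch_latest_height();
        let start = self.last_saved.map(|h| h + 1).unwrap_or(head);
        for height in start..=head {
            self.db.insert(height, fetch_block(height));
        }
        self.last_saved = Some(head);
    }
}
```

The downside the speakers describe follows directly from this shape: the consumer only learns about new blocks by asking, and a missed tick means a catch-up loop against the RPC.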
D
The next step in the history is the industrial age. It's still 2020, but I had joined NEAR by this moment, and Frol, in his tricky and sneaky way, proposed that I do some Rust — and at that moment we started to create the indexer framework.
D
I'll tell you what the indexer framework is a little bit later, but the main point I want to emphasize here is that after we built it, it was used by the indexer for wallet, which is deprecated now, and for the entire year 2020 we were building the indexer for Explorer — because it was, and still is, the biggest indexer out there, at least for now. So, a little bit about what the indexer framework is.
D
On the left you can see a representation of the nearcore node. It's an environment totally closed in on itself, which doesn't like to interact with the outside world through anything except the JSON-RPC — but we managed to convince it to give the data out, and that was the indexer framework: we integrated into the nearcore node to make it possible to read and watch for data from there.
D
So, the next step in the history — it's the future, late 2021. We'd had enough of the indexer framework. We saw that it's... oh sorry, I missed a slide — I was thinking something looked weird.
D
So, during 2021 we used the indexer and ran it, and we found out that it brings a lot of suffering to anyone who uses it — not because it's bad software in itself, but because it's complex software, and when you need an indexer—
D
—it doesn't mean you want to become a node operator; that's not something you desired before running it, but you had no other options. The software was very resource-consuming and very expensive, and there was the hell of syncing nodes: everyone who is currently running one knows that when you start the indexer and see that "downloading headers" percentage, the process feels endless.
D
I remember we had a call, in 2020 I guess, and Josh asked: can you show how to create an indexer? I said yes, of course, but we won't be able to start it, because waiting for the node to be fully synced and working is madness. And another problem with the software is what I call update-maintenance hell: every time nearcore releases a new version, you need to upgrade your indexer.
D
Otherwise it will stall. You'll need to update it anyway and go through the node syncing process again — and maybe again, a lot of times. So in late 2021 we decided we needed to change this. We came up with a brand-new name — we decided to call it NEAR Lake — and we set goals for ourselves: we wanted to create something that is simple to use.
D
It requires minimal resources, it requires almost no — or minimal — maintenance, and of course we tried to come up with a way to avoid the syncing hell and to make it a fast start, a quick start: you just start it and you're on the go. So in early 2022 we created the Lake ecosystem.
D
The Lake ecosystem consists of two pieces. The first is the Lake indexer, which is still indexer-framework stuff, but the end user doesn't care about that: we run the Lake indexers ourselves and save all the data about every block to AWS S3. The second is the Lake framework, which is for the end user: you use the Lake framework and you get the streams from AWS with minimal boilerplate.
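As a sketch of how a consumer addresses that S3 data: to my understanding, the Lake indexer writes each block under a zero-padded block-height prefix, with a `block.json` plus one JSON file per shard. Treat the exact key format here as an assumption — the repository README is the authoritative description.

```rust
/// Zero-padded block-height prefix, e.g. 9820210 -> "000009820210"
/// (assumed 12-digit padding, as used by the near-lake-data buckets).
fn block_prefix(height: u64) -> String {
    format!("{:012}", height)
}

/// Key of the block-level JSON for a given height.
fn block_key(height: u64) -> String {
    format!("{}/block.json", block_prefix(height))
}

/// Key of a single shard's JSON for a given height.
fn shard_key(height: u64, shard_id: u64) -> String {
    format!("{}/shard_{}.json", block_prefix(height), shard_id)
}
```

The fixed-width prefix is what lets a consumer list keys in block order and resume from an arbitrary height with a simple `start-after` listing.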
D
So what have we achieved with the Lake ecosystem? It costs approximately 18 bucks per month, it uses 145 megabytes of RAM — and it's not optimized yet, so that's not the minimum — and it is always synced, which is the biggest achievement we've got. And if I'm not mistaken, I have some example code.
D
Yeah — I hope you can see it. Oh yeah, this one. So this is a Rust example using the NEAR Lake framework, Rust version.
D
I've marked the boilerplate part, which you can see right here, and here is the logic you put in — your logic for what to do with the streamer message. And as a bonus we have a small example of the NEAR Lake framework, JavaScript version — it's probably more proper to say TypeScript version, but it's the same stuff. I've marked only the user logic here, so as you can see, it's minimal code.
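The split between boilerplate and user logic can be illustrated with a stripped-down handler. The types below are simplified stand-ins of my own, not the real `StreamerMessage` from near-lake-framework, which carries far more detail:

```rust
/// Simplified stand-ins for the framework's streamed types.
#[allow(dead_code)]
struct Transaction { signer_id: String }
struct Shard { transactions: Vec<Transaction> }
#[allow(dead_code)]
struct StreamerMessage { block_height: u64, shards: Vec<Shard> }

/// The "your logic" part: count transactions in a block. Everything
/// else (S3 access, deserialization, driving the stream) is the
/// boilerplate owned by the framework.
fn handle_streamer_message(msg: &StreamerMessage) -> usize {
    msg.shards.iter().map(|shard| shard.transactions.len()).sum()
}
```

In both the Rust and TypeScript versions of the framework, this handler is essentially the only piece the user writes.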
D
And all the code I've brought as examples is a completely working solution. Recently we decided it was time, and we created a dedicated website with docs about indexers — everything indexer-related: some information about nearcore, about NEAR data, about indexers. We're going to fill it up with a lot of tutorials. We've already created an explanation of how data flows in NEAR Protocol, and some articles with explanations — and more to come.
D
So please visit our new indexer docs and share your thoughts. I guess that's all from me, so I'm passing it on — I'll be your host for the slides; next slide, please. I'm not sure what the best time is to answer questions — immediately?
A
Yeah, we can definitely answer questions as they come. I don't see any that have come in so far — I just had one from Mario saying "congrats on a great project, Lake is a very needed addition to NEAR". But yeah, please feel free to continue to ask questions. Oh—
D
From Benji: "How does the speed of the Lake indexer compare with running your own indexer node, if I was running a node using the indexer framework?" Okay, I got the question. There is some difference, I would say, between the indexer framework and Lake, but it's not noticeable.
D
It's just a matter of milliseconds, and that's all. And I have a question from Mario: why S3? It's a tough question, actually. At first we went with Kafka, and we tried it — but our goal was to save the history somewhere, and the goal of the Lake ecosystem is not to provide a completely decentralized solution.
D
For complete decentralization you can use the indexer framework, and there is nothing we can do to speed that up. We wanted to empower web2 developers, newcomers, or anyone without a big budget with something that lets them get the data from the blockchain as fast as possible, even historical data. That's why we ended up with storage like S3. I hope this answers the question.
A
Yeah, one more thing before we move on to the next topic — Frol mentioned it in the comments, but I'd like to highlight it too. Before this, to run a node or an indexer you'd need an eight-CPU machine with a one-terabyte solid-state drive, it would take days or weeks to sync up in the beginning, and then the rough estimate was, what, 500 a month?
A
You'd probably want to run two of them for redundancy, and, as Bohdan talked about earlier, as releases came out the indexers would stall, you'd have to update them, and you'd lose a little bit of time. It was not an optimal experience. Now we've gone to something that costs very little in comparison — the code, compared to what we were doing before, is so small — and it's awesome that we now have a JavaScript implementation there too.
D
And just to clarify: the boilerplate code is almost the same, we just use another package. Your logic can be as big as you want, and you can optimize it; what changes is the way it gets the data. Olga is currently working on a solution on top of it, and she actually—
D
I believe she is the happiest person, because she is building on Lake already and she doesn't participate in this syncing hell. The most annoying thing was that you couldn't develop an indexer properly with real data from testnet or mainnet on your local machine anymore — the data is so huge, and your machines are so weak, that they can't even sync, and that was a problem. Now, if you need to stop, you just stop — and you can go back to that block immediately, without any delays.
E
That's true. Our previous version of the indexer had this "downloading headers" line — you'd see it and you'd just want to die. Now you can test anything from mainnet on your local machine and it's not a problem at all: you can just receive a block from the point in time that you want, and it just works.
D
It's a very interesting question, because it depends on what kind of data you're expecting. If we're talking about the comparison with the indexer framework — how fast you get the data in Lake — it's a matter of milliseconds, I would say; the longest delay you can get is about two seconds, because we pause when we don't get a new block.
D
But if we're talking about real conditions, then there's a lot happening in nearcore: a transaction is sent to the RPC, the RPC routes the transaction to the node where the account belongs, it starts executing there, and so on and so forth. Frol, remind me — what numbers did you get?
C
Let me sum it up. There is a delay before your transaction gets into the blockchain, but the question was: once the block is minted, how long does it take to arrive at the indexer framework or the Lake framework? The time is mostly the finalization stage — the longest wait is for the block to be finalized.
C
It takes three blocks to finalize, and currently mainnet runs at roughly 1.3 seconds per block produced.
C
So this means that in about four seconds you'll receive it on the indexer framework side, and then we take roughly 100 milliseconds to go back and forth through the network — to put it on S3 and download it on the framework side. So in comparison: about four seconds of finalization delay, and then just roughly 100 milliseconds more to get it on the Lake side.
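Those figures can be folded into a tiny back-of-the-envelope helper. The constants in the test are just the rough numbers quoted above (three blocks to finality at ~1.3 s each, ~100 ms of S3 round trip), not measured guarantees:

```rust
/// Rough end-to-end delay from block production to a Lake framework
/// consumer: the finality wait plus the S3 round trip.
fn lake_delay_ms(blocks_to_finality: u64, block_time_ms: u64, s3_round_trip_ms: u64) -> u64 {
    blocks_to_finality * block_time_ms + s3_round_trip_ms
}
```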
A
I think we're good on questions again. Oh — you're breaking up. Do you have something to follow up with on that?
A
Oh, one last follow-up: so the difference is only 100 milliseconds between Lake and running the indexer framework?
C
I would say so — this difference is mostly the network latency that we count here.
E
So we discussed NEAR Lake, and now I want to discuss the next steps with you. To start, we first need to discuss who actually needs this indexed data, because we want to serve it in a structured way and be able to query it. Our first client is NEAR Explorer — as you all know, you can search there for any historical data and check the state of a block, an account, any transaction or receipt, anything you want.
E
We also have third-party tools: other explorers, other wallets, many different startups that just want to serve the data, communicate with it, or collect the data about, for example, their smart contract.
E
It solves many different tasks on our side. First, the relational model of Postgres suits us best, because our data is highly structured and it's really comfortable for us to use all these relational tables — primary keys, many different foreign keys, unique indexes.
E
I have to say that transactional queries run pretty fast even now: our biggest table has 500 million rows, and we can still join it, query it, order by, filter — it all runs pretty fast, even on tables of that size.
E
Maybe the main problem for us right now is the speed of insert statements. It's limited, and we're really close to that limit. I know how to boost it a little more, and we could live with that boost for a couple more months — but this isn't about living for two or three more months. We want a sustainable solution for the next few years, and that's why it makes no sense to try to boost it right now.
E
I have a story here. When I joined NEAR, more than a year ago, it was a brilliant time to join as a data engineer, because the biggest table was maybe two million rows and it was possible to run any query you wanted. It was a really fun time for me, because I wrote absolutely crazy select statements.
E
I knew everything about the blockchain and I could run anything — and then our data started to grow, exponentially I guess. It was a disaster, and after that I had to think about how to speed up the queries. I have to say that right now it's simply impossible to run the queries I ran back then — but we still need to draw the charts, like the ones we have on the statistics page in NEAR Explorer.
E
So we need to find workarounds. Another disadvantage is that we can't truly give access to third-party tools. We do give access, but it's not a super-powerful machine and the connections are limited — so if you're a startup and you want to compute complex queries, and you're even ready to pay for it: sorry, we don't provide any solution for you right now. It's simply not possible to run complex queries if you're a third-party tool.
E
For the lines with the small stars, I wanted to provide the links: our analytics are calculated at the second link, and the first link contains the shared public access — it has credentials if you want to connect to our database and run some simple queries.
E
Please don't abuse it. Next slide, please. So, as you've understood, we need to go further and find other options — and since burning everything down is not an option... although it would actually solve our problems, it might produce new ones.
E
Next slide, please. So we need to go further and find other solutions and try other databases. Maybe other relational databases could suit us better; maybe we need to look at columnar databases or sharded databases — we've actually started looking at them. We're also thinking about MapReduce solutions. Next slide, please.
E
What should we look at first? Maybe you know good solutions you can suggest to us, because we're at the research stage and we're trying everything.
A
Awesome — and we've just presented a poll to the audience currently on the call. Do we have something for those who will be watching this recording later, so they can participate in this poll too? Is there a page they can go to?
B
Yep, I'll send out the Typeform as well, to share it later.
E
Cool — that's all from my side. I'm really interested in the results of this poll, and I can pass the presentation to... I'm not sure whom.
B
We already have the Rust version of the NEAR Lake framework and we'll continue working to improve it, and we'll also deliver the JS version — which, I believe, has also been shared initially as an MVP — and we'll keep working on that, as well as on what Olga has just been presenting: the warehouse DB and analytics database that fulfills the needs of our developers and community. The second objective is to create updated content — documentation, examples, tutorials. You've seen our team.
B
We have really fun people who genuinely want to benefit our community: answering questions, presenting really clear, simple, easy documentation and explanations of how to use our tools. So we'll be working on improving the documentation link that was presented earlier. And the third objective is to provide a stable and extensible data infrastructure to support Pagoda products.
B
As Pagoda we'll have a lot of exciting products coming along, and the data infrastructure platform team will be supporting all of those exciting initiatives going forward. Next slide, please. Secondly, enabling product-based decision making: we'll be hosting community sessions with all of you — we really value this opportunity for direct feedback — and we also want to improve our project management tooling.
B
That way we can share with you even more of what we've been doing and receive more real-time feedback. We also want to create a council of users for research and feedback — to validate our assumptions with you, to let you experience the betas, gather all the feedback from there, and really improve our products. Next one, please. And the last one—
B
—is all about maintaining and improving what we have: making sure it doesn't break, making sure uptime stays up. Another part is identifying gaps in data needs — we might be missing something, and we really want to know what it is, and to design a solution together with the brilliant minds here, to improve your experience and fulfill all the data needs you have. That's it for the roadmap, and I think if we go to the next slide—
B
I think there's a slide linking to our Stack Overflow page. Just to mention: as the data platform team we really hope to use Stack Overflow as our central place for questions and answers, because multiple people will probably share similar patterns of questions, and we really want to answer them — and ultimately to have self-serve search, so you can get answers in real time, really fast, just by searching yourself.
B
We'll always be happy to answer questions — we have the Discord channels, and our amazing DevRel team helping us with that as well — but yeah, feel free to submit any questions to the Stack Overflow page; we're really looking forward to answering them.
A
Cool — I believe that's the end of the presentation. Now we can open it up for any questions anybody has, so again, please feel free to drop questions at the bottom. I see some rolling in — and I think someone asked a question earlier: Arvin asked something about a schema being published somewhere.
C
Yeah, Bohdan already provided the link, but I wanted to take it live as well. The link Olga shared during her presentation points to the GitHub page of near-indexer-for-explorer. In the README file you can find the public access credentials, and at the bottom of the README you can see the schema representation. You can also use any DB viewer for Postgres, and it will show you these nice relations, because we have foreign keys set up in place.
C
So it should be more or less intuitive for you — though our representation is a bit more aligned, and its layout makes a little more sense than the default layout your DB viewer would present. So once again: the near-indexer-for-explorer repo is where you want to look to see that schema. Cool.
D
About the comparison of the speed of the JavaScript version with the Rust version of the NEAR Lake framework: it's a bit early to talk about speed, but I guess it's comparable — like any other identical tooling written in JavaScript versus Rust. I don't know how to answer it properly; we haven't measured.
D
I don't think there would be any delays in the data-fetching part of the work, but it might introduce additional overhead in the data processing — in your logic. I guess any Rust logic you write to process and index the data will be faster compared to the JavaScript version.
C
Just to add here: the JS version is not the performance-sensitive part of the story — it's more about consistency, reliability, and stability of the whole implementation. From our experience, Rust tools, once written and deployed, never cause any issues on our side, whereas—
D
But a lot of people prefer JavaScript, so why not — that's why we created it. And we have another question in this regard, I guess, about a planned release date. I can't say we have a planned release date. Currently the JavaScript version is under review, and it might be released this week — maybe not this week but next week, or maybe the week after — but very soon.
C
Yeah, and it will be an early version that we want to give to early adopters, just like we did with the Rust version initially. Our overall plan is to get the Rust version and the JS version out the door by the end of this quarter.
E
I'm not sure we have a proper place to discuss which solution we'll prefer in the end, but I have a repo where I'm playing with the different databases. I'm also trying to think about a slightly new structure, so it won't just be the same thing poured into yet another database.
C
Well, there were a few other comments and questions in the chat, so I'll just address them here. There was a suggestion about using S3 JSON data-lake-type solutions for analytical types of queries — and that's exactly what we're exploring as well, but it seems it would suffer from latency: with any MapReduce solution, the response you get is usually in the realm of at least several seconds in the very best case.
C
Usually it takes minutes to get a result from scanning the whole data set — so let's call it a backup solution for now. We're trying to hit something in the realm of seconds of delay: queries would take milliseconds for very simple ones, and maybe up to tens of seconds for complex ones. That's the ideal goal.
D
I guess there's an elaboration on that question, about dropping the ViewClientActor from the Lake indexer. Yes — the ViewClient is part of the nearcore world, some sort of undocumented stuff that's possible to use from the indexer framework but not from the Lake one, and we do intend to drop it, because we're trying to avoid nearcore as a dependency in the first place.
D
In most cases, I believe, if your use case involves querying the blockchain and you don't have a hard requirement to query your own node, you can freely use the near-jsonrpc-client crate, which is an API for the JSON-RPC, and you'll get your answer. Yeah — and there was something about interruption—
A
Yeah, he followed up. He said: as for resuming from interruption — when starting the Lake indexer you need to manually input the block height, but there used to be an option to start the indexer from interruption without having to input the height. Is that in the pipeline as well?
D
In the pipeline we have a task to make it start from the latest block, like the indexer framework's start-from-latest. Start-from-interruption can be implemented on your side — we don't have it in the pipeline, and it's actually the first time we've gotten that request.
D
So please feel free to open an issue with this kind of request, and we can consider it. I'm not sure we actually want to add any storage — additional storage — to the NEAR Lake framework, but it's discussable. I really encourage you to open the issue on GitHub so we can collect different thoughts.
C
With the Lake framework you can now easily implement many of the things you couldn't implement with the original indexer framework, and this particular one — start from interruption — is very easy to implement on your side: when you process your streamer message, after you finish processing the block, you just save that block height somewhere, like on disk, and then—
D
Interrupting you — I suggest we create some sort of tutorial on this and put it in the docs; it might be useful for those who want this feature. So I guess we can add that.
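A minimal sketch of the checkpointing idea described above, assuming a plain text file is an acceptable place to persist the last processed height (the file name and fallback are my own illustrative choices, not framework API):

```rust
use std::fs;
use std::path::Path;

/// Persist the height after each fully processed block.
fn save_last_height(path: &Path, height: u64) -> std::io::Result<()> {
    fs::write(path, height.to_string())
}

/// On startup, resume one block after the stored height, or fall back
/// to a configured default when no checkpoint file exists yet.
fn resume_height(path: &Path, default_start: u64) -> u64 {
    fs::read_to_string(path)
        .ok()
        .and_then(|s| s.trim().parse::<u64>().ok())
        .map(|h| h + 1)
        .unwrap_or(default_start)
}
```

The resulting value would then be passed as the start block height when configuring the Lake framework stream.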
D
It depends on the number of shards. In the cost-estimation section of the README we put a formula — our estimations were done for four shards — and you can easily follow the formula, change the number of shards, and get the numbers. We could even implement some sort of calculator.
C
But it will never exceed the cost of running the indexer framework, because for the indexer framework you'd need to run a node, and roughly $500 USD per month is an unbeatable baseline there — while with the Lake framework it's in the realm of adding maybe five bucks per shard, or something like that, in terms of read costs for history.
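A rough sketch of such a calculator. The per-block request count (one `block.json` plus one object per shard) and the per-request price are illustrative assumptions of mine; the formula in the repository README is the authoritative source:

```rust
/// Back-of-the-envelope S3 GET cost for one Lake consumer per month.
/// Assumes (1 + num_shards) objects fetched per block: block.json plus
/// one file per shard. `usd_per_1000_gets` is an assumed price point.
fn monthly_get_cost_usd(blocks_per_month: u64, num_shards: u64, usd_per_1000_gets: f64) -> f64 {
    let requests = blocks_per_month * (1 + num_shards);
    requests as f64 / 1000.0 * usd_per_1000_gets
}
```

At roughly 1.3 s per block, a month is on the order of two million blocks, which is the figure used in the test below; real bills also include LIST requests and data transfer, which this sketch ignores.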
A
One more question: are you working on a dimensional model design for the NEAR blockchain — dimension tables for users and contracts, and fact tables for transfers? If yes, can you share your design? If no, are you aware of any web3 dimensional-modeling blog or reference material, so we can build a NEAR dimensional model ourselves?
D
Judging by the pause and the silence, I guess we're not even aware of what dimensional model design is — so the answer is no, we're not working on one, and unfortunately we can't share anything, because we've just heard of it for the first time. But we really encourage you to actually do this if you want — why not, even more cool projects built on NEAR.
D
And while we're at it, I really want to thank you for the tutorial ideas, and encourage everyone to visit our GitHub repo for the NEAR indexers docs and file requests for whatever you'd like to see as a tutorial or article in the documentation. There are a lot of topics we can and want to cover, but it's really hard to prioritize, and I guess we'd like to prioritize based on your requests — so please don't be shy.
A
Yeah, absolutely. "Can I find the video or the presentation deck of this session somewhere? I'd like to share it with my team." — Absolutely: after this call ends, hopefully today or tomorrow at the latest, it will be uploaded to YouTube at youtube.com/NEARProtocol. We'll have the presentation there, as well as the slides and all the links where you can participate. And again, to reiterate what was just said: we really want developer feedback, community feedback.
A
Right now, I'd say just come to near.chat and join the NEAR channel. We have a validator section in there, but mainly look under the engineering section — if you have any questions, there's a developer-support post there. We don't have a specific data platform team channel yet.
A
Maybe, if that's something people want, we can talk with the team about having something specific for it — but feel free to jump in there. We also have office hours twice a day, every day: if you go to near.org/office-hours you can see the DevRel team holding live sessions twice a day where you can ask questions. We'll answer as best we can and forward what we don't know to the smarter individuals on this call. Frol, you got something?
C
So it's better to open up the conversation on our issues or discussion boards on GitHub — that way we can always get back to the conversations about specific aspects later on, revisit our decisions, and keep several tracks going at the same time, for instance discussing analytical data solutions versus event-based ones.
A
I guess just one comment: "Thanks so much for doing this — an amazing project. Running the Lake indexer has saved my team so much time and money. I'm super pumped to see how this project grows. If you need any beta testers, you know where to find me. Go team."
C
Yeah, I guess the one we already shared — okay, the "NEAR Lake flows into SQL" issues.
A
Cool, awesome — and again, once we post this on YouTube we'll make sure you have all these links available. We're just about out of time, so I want to thank everyone for joining us — we're really stoked to have these community sessions and to get this live feedback and engagement. This meeting takes place every last Thursday of the month, and feel free to join us every Thursday — we're having community meetings: the first week is dev console/Explorer, the second one is protocol—
A
—anything NEP-related or for the core team. The third week is tooling — anything developer tools, come join. And then this one, again, is the data platform team. So yeah, we're excited to have these continuing every month. Feel free to let us know how we're doing, give us comments and suggestions — we love feedback. Thanks so much for joining, really appreciate it. Thank you.