Description
Project Heartbeat
00:00- Intro
03:20- 3.0 Released 🚩! https://nebula-graph.io/posts/nebula-graph-v3.0.0-release-note/
03:42- Master Core
- Prop pruner (https://github.com/vesoft-inc/nebula/pull/3750)
- `--disable_page_cache` (https://github.com/vesoft-inc/nebula/pull/3890/)
- GetProps LIMIT push down (https://github.com/vesoft-inc/nebula/pull/3839)
05:43- Spark-Connector and Algo(2.6.2)/Exchange(2.6.3)
- fixed invalid address when it is a domain name
06:40- Nebula-Contrib
- php, .net, nodejs client moved here
07:38- Topic: Nebula Graph Index Explained
- 📚 https://www.siwei.io/en/nebula-index-explained/
A
Can you hear me? Yes, thank you. Cool, welcome. Shall we start? Yeah.
A
Yep, okay. So we will introduce a new member, Wei — same pronunciation as my name — and then we will go through the news in our projects. Today I prepared one ad hoc topic, which aims to introduce NebulaGraph index, and after that we would have the open discussion as usual, but probably we won't have that either, because we only have Wei today. So first, Wei, could you introduce yourself?
B
Yeah, sure. Hi everyone, my name is Weishan. I'm currently a content marketing specialist at NebulaGraph, and I'm thrilled to join the community.
A
Cool, thank you. So yeah, we will hear more from Wei in the community in the future, and maybe you will see his articles.
A
We will quickly go through how we run each of these meetings in the coming months, because a lot of people may be joining us for the first time. The meeting is scheduled bi-weekly, and before each meeting anyone can propose topics or concerns they would like to see discussed, in a safe fashion.
A
You can put them in our documentation: there is an online Etherpad and a wiki page. During the meeting we will have an introduction, the project heartbeats, and an open floor for discussion, plus ad hoc topics: any proposals you have prepared, ideas you want to bring up, stories you would like to share, or things you made that are related to NebulaGraph, or anything else that's relevant. We will archive everything, text and video, and put it on our wiki page.
A
So anyone can check any of the meetings. Today I will go through the project heartbeats. One big thing is that we finally released 3.0 last week. Please check the release note in the documentation for the details, and if there is anything else you want to know, just ping us in the Slack channel.
A
So with this PR, one of our contributors made property fetching more precise: when the planner is planning to fetch the actual values or properties, it now prunes the ones that aren't needed, so the plan is more optimized during the planning phase. Another thing is that a contributor introduced a new flag, a configuration named `--disable_page_cache`, which exposes the equivalent configuration from RocksDB; that is, it lets you optionally disable the operating system page cache.
A
This is helpful in certain cases, but be careful with it: if you disable the page cache, which is enabled by default, please ensure your block cache is configured large enough, or you will run into insufficient memory.
A
Another thing, from Sherlock, is that we finally have the GetProps LIMIT pushed down to the storage side; this is the related PR. It is one of a series of tasks in which we push filters and other operations down to the storage side, which certainly improves performance.
A
So that's the core part. Next comes our Spark Connector.
A
Actually, this is a bug fix I made just before the last meeting, and in the last two weeks the downstream projects of the Java client were also bumped, so we made hotfix releases on top of this bug fix: Spark Connector and Algorithm 2.6.2, and Exchange 2.6.3.
A
It fixes the case where you are running NebulaGraph on Docker Compose or Kubernetes and the endpoint exposed is a domain name rather than an IP address; that is supported only from these versions onwards.
A
Another big thing is that we moved a couple of our SDK projects from their original authors to our GitHub organization, nebula-contrib, and now the PHP, .NET, and Node.js client projects live there, so be sure to check out those repositories. That's all for today's heartbeats.
A
Then we move on to the ad hoc topics; there is only one today, from my side. I actually wrote a blog post at this URL, in English, with more details, and I will switch to the slides now. Okay, I will also make a separate video for it later.
A
Okay. So an index for a database is quite a normal thing, but it is slightly different in a graph database, and especially in NebulaGraph.
A
I think fresh users, or newbies to the graph world, can be confused by it for some time. That's why I decided to revisit this topic and prepare some more material. Previously I drew a sketch, which I will share later, but now I have put more into the topic. I will explain the why and the what of NebulaGraph index, and then review where an index is actually used in a query.
A
In one sentence: in NebulaGraph, an index is used only in the scenario where we want to get data — a vertex or an edge — from property conditions.
A
That is the one-sentence explanation. When it comes to an actual query, it looks like this: we do a LOOKUP on tag1, with conditions on its properties col1 and col2, and that is all the information we have for fetching the related data; here we want to fetch all the vertices of this tag matching these conditions.
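As a sketch, such a property-condition query looks roughly like the following nGQL; the tag and property names (tag1, col1, col2) are illustrative placeholders, not taken from the slides:

```ngql
-- Fetch the IDs of all vertices of tag1 whose properties match
-- the conditions. This pattern requires an index on tag1.
LOOKUP ON tag1
  WHERE tag1.col1 == "foo" AND tag1.col2 == 42
  YIELD id(vertex) AS vid;
```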
A
This is the typical, minimal case where NebulaGraph index applies. If we look at this query, it is actually quite similar to what we do in an RDBMS with SELECT * (or SELECT something) FROM a table WHERE specific columns meet some conditions — and we know from experience that this is, by default, a basic capability of a tabular RDBMS.
A
So in an RDBMS we don't need to explicitly create any index to enable that query; it works by design, without introducing anything. We can still create an index in an RDBMS to accelerate a specific conditional query, though. Underneath, an index in any database is just duplicated data compared with the source-of-truth raw data, but kept in a sorted way.
A
Basically, you can create an index for a table — say it is also called tag1 — sorting on both col1 and col2, and the database will maintain a duplicate replica of the data, sorted by the indexed fields, and that helps you when you are reading.
A
When you are doing reads it accelerates them, but if you don't create the index you can still run the query. Things are different in NebulaGraph, or in a graph database in general, because the source-of-truth raw data in a graph database is persisted in a connection-oriented fashion.
A
For this reason, the raw data is not good for this kind of typical tabular pattern; it is expensive, because underneath it means a full scan of the data, and since ours is a distributed design, that is even worse. So NebulaGraph decided to prohibit this kind of query pattern unless an index has been created.
A
This is a big difference between NebulaGraph index and the index in, for example, MySQL. And as I mentioned, the typical graph query is one where we have a starting point and then do the hops.
A
The thing that distinguishes a query that needs an index from one that doesn't is the starting point. A graph query that starts from a specific VID needs no index, because we are not querying to get a vertex from a property.
A
We already have a starting point, and for this kind of query — even if you put a WHERE clause on it, say prop1 greater than 1 — the underlying execution still neither leverages nor requires an index.
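For instance, a walk from a known VID needs no index even when a property filter is present; this is a hedged sketch with made-up VID, edge type, and property names:

```ngql
-- Start from a concrete vertex ID: no index is involved,
-- even though a WHERE filter on properties is present.
GO FROM "vid-1" OVER edge1
  WHERE $$.tag1.prop1 > 1
  YIELD dst(edge);
```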
A
So that was an example for comparison; in the real world this first part can be a LOOKUP based on an index, and what follows the pipe is still the graph query. Either way, from the starting point onwards, none of it requires the index. We have the intuition that an index is what gets a vertex from a property condition, but that is not always the case; there are exceptions.
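Combining the two sides might look like this: an index-backed LOOKUP finds the starting VIDs, and the piped GO needs no index. Names here are again illustrative placeholders:

```ngql
-- Before the pipe: index query. After the pipe: graph query.
LOOKUP ON tag1 WHERE tag1.col1 == "foo" YIELD id(vertex) AS vid
  | GO FROM $-.vid OVER edge1 YIELD dst(edge);
```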
A
Afterwards I will share some of the implementation of the index, and you will see there are certain limitations on the native NebulaGraph index. Some cases — typically a wildcard or regular-expression condition on the properties by which we want to search for a vertex — are what we call full-text search.
A
For those, we cannot use the native index. Instead we provide a feature called full-text search — all the other graph databases have a similar design — and in this case we don't store the index data in the NebulaGraph cluster.
A
Instead, we put the index data — the duplicate of this information — outside the cluster. Here we leverage an Elasticsearch cluster, and during the write path, when we create or manipulate data, the index data is updated asynchronously; there is a Raft listener to do that. But we are not focusing on this topic today; we will have a dedicated session on this feature in the future.
A
So that is all of the what and the why. What a NebulaGraph index is: data that indexes the property data to help us get a vertex or an edge from a property. Why we need it: because in any database, if we want to filter on conditions fast, we need sorted data. And why it is mandatory for these queries: because a graph database is designed differently from a traditional RDBMS.
A
Yeah, that is the what and the why, and now I will give more information on the implementation side of this index design. This is the sketch I mentioned before; I drew it a couple of months ago, last year, so you can fetch it here.
A
Okay, so we can see this is the NebulaGraph architecture: this is the metad, this is the storaged, and this diagram shows how we save data in the underlying RocksDB.
A
On top of the RocksDB-based key-value storage, this is a piece of vertex data: for each vertex, per tag, we have one such entry in RocksDB. The key part holds the type, the partition, the vertex ID, and the tag ID, and the value part holds the properties of that vertex. And this is where the index data comes in.
A
If we create an index for a certain tag, there will be actual entries like these: you can see this is the index ID, and this is the index binary, which encodes our indexed, sorted property data — this is the col1 value, this is the col2 value — and at the end comes the VID; the value field is not used in the index storage.
A
So, as mentioned, in this query we are actually scanning this index data to find VIDs from the given indexed property conditions.
A
Okay, one thing I forgot to mention — it explains why there is a limitation on the index: it is left-match only. Because we persist the data this way, an index lookup translates to a left-prefix scan in RocksDB, and that matters if you create a composite index, that is, one indexing more than one property of a tag, just like this one.
A
Here there are at least two properties, so the order matters, because we are doing a prefix scan. Another thing I forgot to mention is that this index data is sharded together with the VID of the vertex data; that is, it is stored, sharded, and distributed in exactly the same way, by vertex ID. We don't keep it separately in a centralized way or under a separate sharding policy. It is sharded together with the vertex ID, and from that we can guess the implications.
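A minimal sketch of why order matters in a composite index; the names are hypothetical, and note that string properties need an indexed prefix length in nGQL:

```ngql
-- Index sorted by (col1, col2): a left-prefix scan can serve
-- conditions on col1 alone, or on col1 plus col2,
-- but not conditions on col2 alone.
CREATE TAG INDEX i_tag1_c1_c2 ON tag1(col1(10), col2);
```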
A
Certainly there are real costs when we write: when we manipulate or insert data, the related index entries have to be created as well. That is the trade-off — it is expensive on the write path, but it helps our read path.
A
Generally, an update will first read the old data, build the old index keys and remove the old index entries, and then insert the new data. This complexity comes from the nature of a distributed system: we want to ensure consistency. So there are actual random reads here, and such reads are considered quite expensive in LSM-tree-based storage, because writes can be done sequentially but these random reads cannot; still, they cannot be avoided.
So we should carefully decide, when designing the schema and the data model, whether we need index-based queries at all: you can make the VID design meaningful enough that you can easily find a certain vertex without going through a property condition. But sometimes it cannot be avoided.
A
So create an index only when it is needed. That was the write path; now the read path. In this example we run the same LOOKUP query I mentioned before, and this is mainly handled in graphd, which is here. The query comes in, is validated and parsed into an AST, and in case an index is involved, the planner selects among the existing indexes — assuming the needed index is there.
A
I created two indexes on the same tag over the same set of columns, col1 and col2, but in different column orders. The conclusion is that in this case the planner chooses the second index, the one where col2 comes first.
A
The reason is that we use rule-based optimization: the plan is optimized based on specific rules, and this rule assumes that, with the two filters here — an equality condition and a range condition — the equality condition will be faster if it is filtered first.
A
That is the one point, and the other filter is a range. So this rule was defined in the optimizer's rule set; there are certain cases where the assumption doesn't hold, but generally it is helpful, and in the future hopefully we can build cost-based optimization. For now we do it by rules. So the second index is chosen here, and then, executing the plan, graphd queries data from storaged.
A
This request is fanned out to all the storaged instances, because we designed this in a local fashion, where the index data is distributed together with the sharded data. So we have to fan out to all the related storaged instances, and each storaged performs the index scan for the partitions it owns. That is the read path. Knowing these details helps.
A
It helps you understand how you want to leverage the index and what trade-offs to make. For example, when we do a LOOKUP, the topN and LIMIT filters here are pushed down to storaged, so the data transferred is much smaller, and we keep doing more work on pushing things down.
A
One thing to mention is that from 3.0 we now support this kind of query without leveraging the index, which differs from what I described, but with one requirement: you have to put a LIMIT on it, because the LIMIT filter can now be pushed down to the storage side. That is why we no longer prohibit this query pattern without an index: with the LIMIT pushed down we don't need a full scan, so it is not as expensive as before.
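A hedged sketch of the 3.0 pattern described above — a sampled scan with the LIMIT pushed down, run without any index (tag name is illustrative):

```ngql
-- No index on tag1 required: the LIMIT is pushed down to
-- storaged, so only a bounded scan happens rather than a
-- full table scan.
LOOKUP ON tag1 YIELD id(vertex) | LIMIT 3;
```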
A
So this is a new pattern we support without introducing an index. On the read path it is still not as efficient or fast as having an index, but on the write path it is much cheaper; so if your queries just want to sample some data and you can accept the LIMIT, writes will be much faster without introducing an index.
A
So the conclusion, again, is: use an index only when you have to. As for how to use it, I am not going into detail, because it is all covered in the documentation. We create an index with the CREATE INDEX clause, and one thing to mention, as we said, is that index data is maintained while we insert or manipulate data, on that write path, in a blocking, synchronous way.
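Creating an index is a single clause, as documented; the names below are hypothetical:

```ngql
-- Index tag1 on one string property; the (10) is the indexed
-- prefix length, which is required for string properties.
CREATE TAG INDEX IF NOT EXISTS i_tag1_c1 ON tag1(col1(10));
```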
A
But what about data that already exists? Say we have a tag with a thousand vertices already inserted, and then we create the index. Those thousand vertices, inserted before the index was created, will have no index entries, because index maintenance happens only synchronously on the write path.
A
So we need to rebuild the index to get index entries created for the existing data; that is why REBUILD INDEX exists. Don't forget it — it is a pitfall for new index users, myself included: we need to rebuild the index whenever we create one on top of existing data. The rebuild is an async job, and you can use SHOW INDEX STATUS to check whether it has started or completed.
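The rebuild-and-check flow sketched, reusing the hypothetical index name from above:

```ngql
-- Cover data inserted before the index existed;
-- this runs as an asynchronous job.
REBUILD TAG INDEX i_tag1_c1;
-- Check whether the async rebuild job has finished.
SHOW TAG INDEX STATUS;
```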
A
If you run EXPLAIN or PROFILE on a query based on an index, you will see the starting point here: it does an IndexScan based on the name of the player to find the vertex, and afterwards comes GetNeighbors; from that point on, while GetNeighbors collects all the connected vertices, the index is not used. So please take note.
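You can verify which plan nodes touch the index with EXPLAIN; the query below is an illustrative example, not the exact one from the slides:

```ngql
-- The plan should begin with an IndexScan node; subsequent
-- graph-traversal nodes such as GetNeighbors do not use it.
EXPLAIN LOOKUP ON tag1 WHERE tag1.col1 == "foo" YIELD id(vertex);
```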
A
The index is used only in this pattern. Similarly, this is the equivalent query in nGQL: before the pipe we do the index query, and after the pipe we do the graph query. This is the graph query; this is the index query; this is the graph query; this is the index query. And yeah, let's recap — what is an index? Oops, let me stop the share.
A
An index is sorted property data that helps us find certain data — a vertex or an edge — from a pure property condition. If your condition includes the vertex ID, you don't need an index.
A
An index is not for graph walks or graph queries: as we mentioned, when doing GetNeighbors the index data is not used. An index is left-match only; if you have a query that cannot be fulfilled by a left match, maybe you are looking for full-text search, or you should redesign your data. And we should use indexes carefully: they have real costs, which are not cheap, whenever we write data.
B
I mean, no — this is quite a novel topic to me, so I don't have any questions. I am just reading your blog about NebulaGraph index and just getting started, so yeah, thank you.
A
Cool, thank you. So we will call this the end, because we don't have anyone else joining this time, other than Wei. Be sure to register for our meeting in Zoom, so you will get a reminder.
A
We are doing this bi-weekly for now, and maybe we will change the time — we noticed this slot is not friendly for folks in Europe, so maybe we will adjust it. If you have a preferred time slot, please be sure to let us know. Our topics will be on this shared Etherpad, so you can add yours there before any meeting, and be sure to check out our Slack channel.
A
Oh, one last big thing: we opened the beta of the cloud version of NebulaGraph as a managed service last week, and now you can search for the graph database in the Azure portal, Microsoft's cloud offering. You can find Nebula Graph Cloud there, and it is quite cheaply priced during the beta period, so be sure to check it out. And we will call this the end — see you, bye-bye, Wei.