Description
Ad-hoc topic: Nebula-UP (v2) All-in-one introduced
Nebula-UP can now install a whole set of Nebula Graph tools in one line, so fresh users can play and test quickly:
👉🏻 https://github.com/wey-gu/nebula-up
🚩 Project Heartbeats
- Remove useless AppendVertices https://github.com/vesoft-inc/nebula/pull/4277
- `MATCH (v)-[:like]-() RETURN v` no longer appends the destination vertex when only the source vertex is returned
- Explain/Profile Dot format fixed https://github.com/vesoft-inc/nebula/pull/4280
- `query_concurrently` bugfixes https://github.com/vesoft-inc/nebula/pull/4288
🎙️ Hao Wen will talk about Nebula Graph at SIGMOD 2022 👉🏻 2022.sigmod.org/program.html
Okay, we will start. Hello everyone, super exciting to have another Nebula Graph community meeting. Let's get started with the agenda. As usual, we will have new-member introductions when someone has joined. We hold this meeting bi-weekly, so everyone can join us and bring your own story, your proposal, or your topics to be discussed in this slot; and if you want to raise your ideas before a meeting, feel free to join our Slack channel.
So today I will go through the project heartbeats first, and then I will give a topic around Nebula-UP, which is a side project I created to help our fresh users quickly set up a Nebula Graph cluster; I have some news about it to share in this meeting.
We don't have too many PRs merged in the last two weeks, but we actually have a bunch of pending PRs under review, targeted to be merged in the upcoming weeks for the 3.2.0 milestone. I will mainly introduce three of our PRs.
The first one is to remove the useless AppendVertices operator. This is actually an optimization: in certain cases, for example in this case, the pattern to be matched is a vertex with an outgoing edge typed `like` to some other node with no tag specified, and only the source vertex is returned.
Previously we were still adding an AppendVertices operator to the plan, but now it is precisely controlled and removed, and that helps performance.
So that is the first optimization PR. The second one: previously, in 3.1, we had some issues when explaining a given query. Maybe you don't know that you can choose the output format of EXPLAIN. When you want to check the execution plan of a query, by default it is output as a human-readable table, but optionally you can specify the format as dot, so it will instead generate a text-format DOT file.
If you copy that DOT-format string into Graphviz or something equivalent, you can render a graph view of how the query plan flows from one operator to another as a DAG. That output was broken somehow, and now it is fixed.
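As a sketch of that feature (the console flags and address below assume a default local Docker deployment, and the console output also includes banner lines you may need to trim before rendering):

```shell
# ask graphd for the execution plan in DOT format instead of a table,
# then render the plan DAG with Graphviz
echo 'EXPLAIN FORMAT="dot" MATCH (v:player)-[:like]->() RETURN v;' \
  | nebula-console -addr 127.0.0.1 -port 9669 -u root -p nebula \
  > plan.dot
dot -Tsvg plan.dot -o plan.svg   # requires Graphviz installed
```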
Another one: around 3.1.0 we brought in more aggressive default concurrent-query policies, and that introduced some bugs in certain cases, like index-related ones and some others; they were fixed in this PR.
But if you look into the open PRs with the 3.2.0 milestone, you will see a bunch of optimizations out there, like find shortest path, removing useless Project operators, and pushing down specific filter operators. Those are pretty exciting ones.
So please, if you are interested, just check out our GitHub repo. That is the main part of today's project updates. Then, finally, I will give you my ad-hoc topic: it is about Nebula-UP.
I believe many of you already know it. It is a one-liner tool to help you set everything up; it can even install Docker and Docker Compose for you if you don't have them on your macOS, and then it puts everything related to Nebula Graph in place for you to get started and play with it.
Previously, I made it possible to run the console, a Nebula Graph cluster, and Nebula Studio, and I have been planning to bring more components to it. This week I decided to make a move, and I tried to bring in Nebula Dashboard, which includes Prometheus, the node exporter, the Nebula exporter, and the dashboard itself as a couple of different containers. I also brought in a runnable Spark and Hadoop cluster, containerized, together with Nebula Graph.
So I created some scripts so you can use a one-liner to just run Exchange or Nebula Algorithm, or even use the Spark Connector natively in PySpark. You just give one line and it will run it for you, which is pretty awesome.
And also, finally, I brought backup and restore into Nebula-UP. BR, the backup-and-restore tool, actually evolved a lot in the last quarter: it is now agent-based and decoupled from SSH, so it is much more user-friendly. You can make a full backup of the cluster, and it will automatically transfer the backup either to a local file system or to a remote S3 or S3-compatible object-storage service.
Bringing in BR was the most challenging part. I managed to hack it somewhat, because it was not designed for the container use case, but I made it happen. So if you are interested, you can just take a look into the code.
So what is Nebula-UP? Previously, version 1 was just a one-liner installer that brought up Docker, Nebula Graph, Studio, and Console. You just run it.
It will do the setup for you automatically. Now I am bringing version 2: I introduced an all-in-one mode, in which you can have Spark with the Hadoop cluster, the Spark Connector, Exchange, and Algorithm. You also have a dashboard mode, and of course there is an all mode, where you get everything in one go: it will help you install everything a dashboard needs. You know, the dashboard is a distributed-services architecture.
It could take you hours to bring everything up by hand, but with Nebula-UP's dashboard mode you can have everything in one liner. And there is the BR mode: in this BR setup I help you configure and set up the Nebula Agent, which is like a sidecar for all the services; I install the BR client CLI for you; and then I set up a MinIO cluster for you.
In production, the equivalent might be your own S3 service. So yeah, after you run it, you will see something ending up like this, and by default the Nebula-UP files will be placed in your home directory, in the `.nebula-up` folder. Then you can visit the different services; I will show you them later in a demo. Apart from this handy one-liner, I also created a bunch of different utilities and scripts to help you try different things, each in just another line.
So, for example, this is the output. You can see you can use this line to cd to all the files installed for Nebula-UP, so you can check them yourself; you can use Nebula Studio on this port, and the Nebula Dashboard on another port; and you can actually access your MinIO administration interface, which I didn't show, but you can explore it. And this line is the same as the previous version-1 Nebula-UP, but I made it more usable.
I will demo it to you later. Of course, you can clean everything up with this command. So I put all the handy utilities under this folder, with the extension name `.sh`.
So after you set your PATH environment variable to include this folder, you will have all of these working without specifying the absolute path.
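A minimal sketch of that setup, assuming the default `~/.nebula-up` folder shown in the demo:

```shell
# put the bundled *.sh utilities on PATH so they work without absolute paths
export PATH="$HOME/.nebula-up:$PATH"

# list what is available (prints a hint if Nebula-UP is not installed yet)
ls "$HOME/.nebula-up"/*.sh 2>/dev/null || echo "nebula-up not installed yet"
```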
So, for example, you can run the BR script (br refers to backup and restore): it will load the BR command line from a container, run the corresponding commands, and show the results for you. In this example I had not triggered any backup, so it shows nothing. And when you call console.sh, you will enter the console, already logged into graphd. I will show you more of them later in the demo. And so, okay, I will demo it for you; let me set this up.
So this is the code repo here, and you can see that previously you just called this line to install, with the install.sh script. This is the v1 version of Nebula-UP: it gives you this, and you can even specify different versions. Now the all-in-one mode is the one that I introduced this week; I call it v2, but I did not tag it as such in the repository. So we support a bunch of different surrounding tools, and how do you use it? You just call this line.
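For reference, the invocation looks roughly like this (the exact URL and arguments are assumptions from memory of the repo README; check https://github.com/wey-gu/nebula-up before running):

```shell
# v1 style: Nebula Graph core + Studio + Console in one line
curl -fsSL nebula-up.siwei.io/install.sh | bash

# v2 all-in-one mode: additionally Spark/Hadoop, Exchange, Algorithm,
# Dashboard, and BR with MinIO
curl -fsSL nebula-up.siwei.io/all-in-one.sh | bash
```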
So after you call this line, it will take some minutes depending on your network bandwidth, and you will have a similar output as it was before in v1. After that, you can see I am listing everything in my Nebula-UP home folder, and I will show you everything. So, for example, first I export this PATH, so I can directly call console.sh.
Yes, I can enter here; `SHOW HOSTS`: yep, it is working.
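That console step can be scripted as well; a sketch, assuming the `console.sh` wrapper accepts statements on stdin the way nebula-console does:

```shell
# run one statement through the pre-authenticated console wrapper
echo 'SHOW HOSTS;' | "$HOME/.nebula-up/console.sh"
```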
And you can also actually call this `show` script. It is directly calling BR for us, which actually runs inside Docker, and it lists all your existing backups. So what else can we do? We can try the backup script, which will trigger a full backup for you.
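Underneath, the wrapper is running the `br` CLI in a container; called directly, it looks roughly like this (addresses, bucket, and credentials here are illustrative values for the bundled MinIO, not the wrapper's actual defaults):

```shell
# trigger a full backup to the S3-compatible MinIO backend
br backup full --meta "metad0:9559" \
  --storage "s3://br-bucket" \
  --s3.endpoint "http://minio:9000" \
  --s3.access_key minioadmin --s3.secret_key minioadmin --s3.region default

# list existing backups in that backend
br show --storage "s3://br-bucket" \
  --s3.endpoint "http://minio:9000" \
  --s3.access_key minioadmin --s3.secret_key minioadmin --s3.region default
```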
You can call the restore as well, which I will not show here, but you can explore it. And what about the others? Okay, we have an Exchange example.
If you are interested, you can just hit enter in this folder and see everything. For example, our Spark-related things are here; the underlying Spark-related workloads live here. Yeah, we have a Hadoop and Spark cluster, and inside this Spark node
I put the Exchange, Algorithm, and Spark Connector JAR packages inside it, and my scripts just call them directly. Regarding this Exchange run, I am actually leveraging this configuration, so you can see the details.
I will introduce more source examples here with their own one-liners; for example, I can make it call MySQL, parse data from MySQL, and sink it into Nebula Graph (sink meaning it will call Nebula Graph and inject the data there). Here the source file is called player.csv, so we can cat player.csv.
So it is just two records with different fields; you can look into the details if you are interested. And finally, I want to demo this one for you: it is a PySpark shell, so it is perfect for trying things out.
We can refer to the documentation in the README part, where I give more details; for example, this one, this slide, will call a FIND PATH.
I can do it later. And we can see here we are in the PySpark shell, and we just call this: a DataFrame equals `spark.read.format(...)`, where the format is the Nebula Spark Connector data source, and we add the options: we are scanning vertices in the space named basketballplayer, the tag (the label) is player, and we want to return name and age.
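That DataFrame read can be sketched like so (the data-source class and option keys follow the Nebula Spark Connector documentation; the wrapper script name and the meta address are assumptions for this containerized setup):

```shell
# open the bundled PySpark shell and scan the `player` tag via the connector
"$HOME/.nebula-up/nebula-pyspark.sh" <<'EOF'
df = spark.read.format("com.vesoft.nebula.connector.NebulaDataSource") \
    .option("type", "vertex") \
    .option("spaceName", "basketballplayer") \
    .option("label", "player") \
    .option("returnCols", "name,age") \
    .option("metaAddress", "metad0:9559") \
    .load()
df.show(2)
EOF
```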
It
will
call
the
meta
and
then
storage
d
to
scan
the
data
for
us,
so
we're
actually
calling
the
nebula
spark
connector
in
python.
So
isn't
isn't
that
cool?
So what else can we do? Let's find a long one. In this one we are calling, yep, a FIND PATH from this node to another, over all edge types, with a WHERE condition and so on, and we want to yield the path. So we can see all the paths between the two. Isn't that cool? You don't have to do anything.
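The path query itself looks like this in nGQL (the vertex IDs are from the docs' basketballplayer sample, and the console wrapper name is an assumption):

```shell
# find every path between two players, over all edge types
echo 'FIND ALL PATH FROM "player100" TO "player101" OVER * YIELD path AS p;' \
  | "$HOME/.nebula-up/console.sh"
```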
You just run one line and you have everything set up. And apart from the command line, you can actually use some web services too. This one is the Dashboard: Dashboard is a project for doing the operations and monitoring work towards Nebula Graph, and this is all included in Nebula-UP; you just give it one go and everything is usable.
Okay, yeah, we have those metrics overviews; everything is shown here. And you can actually visit this domain name: thanks to Microsoft, who sponsored me, I can have this set up, and you can play with it yourself. Okay, let's check the other one.
So this is the Studio; this is not new, it was actually already included in v1, but I want to demonstrate it to you too. Let me switch it to English first, yeah. For example: query a player and expand one hop; yes, yeah, so you can play with that. Isn't that cool? And the final one, maybe, that I want to show: yeah.
Regarding backup and restore, we set up a MinIO for you; it is object storage, and it actually has a web console too. I think the password is the default one, this one, and log in, yeah. So we can see we already have this one: remember, we created a backup earlier, yeah. And we can actually call this.
Yes, so if we look into the files here and compare with this one: we now have four different docker-compose folders. Actually, there is another one too, the Studio one; apart from that, there are four of them.
This one is the Nebula Graph one: it is actually reusing the nebula docker-compose project, and I just added extra lines to make the network external, so the other containers can access it on the same network. Otherwise there is no big difference.
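The change amounts to marking the cluster network as external in the compose files of the other tools, roughly like this (the network name here is illustrative):

```yaml
# other tools' docker-compose.yaml: join the cluster's existing network
networks:
  nebula-net:
    external: true
```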
So, regarding BR, we can see backup and restore is here, as another docker-compose, and these are the related config files. So it is here: BR calls the agent, the agent calls the services, and the services make RPC calls to do the corresponding work, stopping services or putting them in read-only mode, fetching all the SST files, and passing them to the backend storage. In this case, I am setting it up with MinIO, an S3-compatible OSS. Also, I have already shown you
the Spark environment. Here, for the Spark part, I set up Hadoop and the Spark master, and I put the three applications inside this Spark environment; when you run one of them, it will call the cluster for you.
So this is the Dashboard. What is inside the Dashboard? There is a Prometheus, the exporters, and the dashboard itself, which includes the static files and the HTTP gateway. Yeah, and that is the main structure, the architecture, of it. So I think that is more or less everything, yeah.
Actually, I have six agents; if you are interested in why, you can look into the code. There is also the BR utility, and a MinIO cluster that serves as the centralized backend storage, and I also introduced a bunch of CLI utilities that I just demoed to you.
You can do everything in a one-liner; you don't have to delve into the details, the parameters, or the dependencies. You just call the one-liner and you will have everything ready, and this makes it extremely easy for you to understand everything on the first day.
So this is the architecture I just explained to you. And in the future I will bring more things to it: I will bring the Elasticsearch integration for full-text search, and Nebula-Bench, so you can run LDBC or your own benchmark towards Nebula Graph.
Although, you know, all-in-one is not optimal for performance, you can still run it. And I may do similar things in my other side project, nebula-kind, which is based on Kubernetes in Docker.
So you can have another one-liner, but you will get Kubernetes running in containers and a Nebula Graph cluster on top of that, via the Nebula Kubernetes Operator. And I will try to bring all those involved components and modes into our documentation, so that ideally a user with a server and internet access can get going with one line.
You can actually try it at the very beginning, without struggling with the details on the first go. And finally, I may try to bring in Kafka, Flink, and some other integrations, whether related to Spark, Flink, or something else, so stay tuned if you are interested. And if you want to bring something specific into Nebula-UP as well, ping me in Slack, drop me a mail, or even create an issue in the nebula-up repository. And that is all of this ad-hoc topic.
Another thing I want to say, and I will say it every time: we are in open beta for our managed Nebula Graph service, on Azure for now. But the news this week is that, from what I have heard, we will launch the Nebula Graph service on Alibaba Cloud, maybe in the upcoming weeks; it can actually already be used, there is just no landing page there yet.
So another piece of news is that one of our contributors based in the U.S., Hao Wen, will be talking about Nebula Graph at SIGMOD 2022.
So this is the website; you can see that. Oh yeah, this is Nebula Graph in the Azure portal, as I show every time: you just search for "graph database" and you will see Nebula Graph Cloud. And yeah, this is the SIGMOD program web page.
You can see here Hao Wen, our contributor; he will introduce to everyone the design, the architecture, and the background stories of Nebula Graph.
So if you are at SIGMOD, be sure to check out Hao Wen's talk, at the listed local time. Yeah, and that is all for today. Thank you very much, see you in two weeks, stay tuned, bye.