Description
🚩 Project Heartbeat
- #3993 Error when the version of a tag/edge schema is greater than 256
- #3946, #4060 UDF progress
- #4063 Query optimization summary based on cache experiment
- #3771 persist learner info
- #4030 refactor: use STL facilities instead of 3rd-party libraries (Jun)
- Feature:
- #3989 clear space
- Others
- #3905 console in binary packages
- Nebula .NET and Nebula JDBC now support 3.0.0
Ad-hoc topic
How I use a Knowledge Graph (with Nebula Graph) to help solve Chinese Wordle
A
Hello Posh, can you hear me? Oh yes, hi, welcome. You are the very first member to join our community meeting in history.
B
Yeah, it's the first time I'm coming to this kind of meeting, and I've watched a couple of things on YouTube — your last couple of videos. So I thought I would join and learn a few things about Nebula Graph.
A
Thank you so much, and you're most welcome. I think there will be no other guests joining us today, so we can start, and you can stop me anytime you would like to ask questions. And feel free to turn on your camera as well. Okay, so for the first time we have a new member — hi.
A
So, can you introduce yourself to us?
B
Now we are trying to get our hands on Nebula Graph. We have been playing with 2.6.1, and we had some challenges getting the system up and running on a Kubernetes cluster.
A
Yeah, welcome! We can check your issues one by one later. Welcome again, and thank you. In the future, for any help, just ping us on Slack, the forum, or GitHub.
A
Sure, sure. Okay, thank you. So, we have this scheduled bi-weekly, and I'm not going to go through all the details.
A
We just had a minor version released in the last two weeks with four hotfixes, and these are the PRs and issues that may be worth checking out. The first one is a corner case: if we alter a schema more than 256 times, there will be a corner-case issue, and, as I recall, this issue was already fixed by one of our community contributors.
A
Another thing: last year in our Nebula hackathon, one of the teams contributed a UDF implementation with WASM, and they even won first place in the competition. Recently this topic was discussed actively, and you can check the corresponding issues here. The next one is from one of our contributors.
A
He is also in the US, and he was doing some experiments regarding performance from the cache perspective on different layers. He observed that the cache was missed in cases where a certain query pattern involves — I don't recall exactly — the empty tag fields query, so he made a brilliant summary of where we can optimize in the rule-based optimization.
A
Also, he was doing something around cache improvement, and hopefully I can invite him to give us a talk in an upcoming community meeting. This one is around some underlying Raft implementation: we're trying to address something around data balance, and this is the PR that improves this issue. Also, a community contributor helped point out that we're not using the STL facilities in the best way, so he contributed a really huge PR replacing third-party libraries with them where needed.
A
Another thing: recently in master we have a feature named clear space, which is a new nGQL command to clear up everything but the empty schema of a certain space. This is very handy in certain cases, especially when you are doing some staging tests, so it will be included in the next release.
A
Another thing is that we finally listened: previously we made the console in C++, and it was naturally included in our binary packages in version 1. When it came to version 2, we switched to Go for the console part, and without thinking much about it we just separated them into different packages. This is not friendly for fresh users, as they have to install another package to connect to the Nebula server.
A
On the other side, our SDK libraries for .NET and JDBC were lifted to the 3.0 version by community contributors, and I can see that PHP is ongoing — hopefully we can have PHP support for version 3 before the next meeting. That's more or less everything about the heartbeats. The topic I want to bring is quite small: I want to share something I did in the last two weeks. I was trying to create a knowledge graph to help solve the Chinese version of Wordle.
A
Okay, so we all know that Wordle is a quite good word puzzle game: you guess simple words in a very interesting feedback loop. You make a first guess, it gives you some hints, you can search your brain or Google, and you can share your results on Twitter or Facebook, which made it go viral a couple of months ago. But for the Chinese speakers —
A
— can we have this kind of fun? There are a bunch of Chinese versions of Wordle, but because the Chinese language is not alphabet-based, it's not easy for us to actually enjoy the fun in a similar form. I was actually composing an article about this.
A
I will post it maybe later this month. Someone was talking about this: if you were making a Chinese version of Wordle, it would be something like this — you have thousands of characters as candidates to be filled in. So it's a disaster; it's not possible. So someone leveraged another dimension of the language, which is the pronunciation of the characters.
A
So then, here you go: you have this Chinese version, named — the network is not stable, I hope; can you hear me smoothly? Yes? Yes, I can hear you. Okay, thank you. So the Chinese version, Handle, is something like this.
A
We are leveraging Chinese idioms, which are also in the form of four characters, but there are still thousands of possibilities when you're filling in Chinese characters. So the answer of the author of this Chinese Handle is that he leveraged the pronunciation. For example, if you give a guess, not only will the character itself be colored as a hint, but also the pronunciation part, with tones.
A
So there are multiple dimensions for the filter conditions, and this is basically playable. But this game is still a little bit hard even for a native Chinese speaker. So then I was thinking about how I could help here.
A
So actually, the process that we enjoy when playing Wordle is: we make some guesses and we check the knowledge in our brain. But even with the brilliant optimization in the design of the Chinese version, it is still too hard for us to actually enjoy it.
A
So I'm trying to create a knowledge graph including the information of the Chinese words, the Chinese characters, and the pronunciations. This is the article about it, in Chinese; I will compose an English version this week. It's something like this.
A
So I made this graph. This is the vertex standing for the idioms, and it has a "has one character" relationship — this is the one dimension that maps to the alphabet-based versions of Wordle. But on the other dimension, you have to create another dimension of knowledge, which is the pronunciation and the pronunciation tones. With this graph created, I did some work, and I shared the graph-building process in this repository.
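As a rough sketch of the idea described here — not the speaker's actual code — the two hint dimensions (characters and pinyin syllables) can narrow idiom candidates like this. A real build would store idioms, characters, and pronunciations as vertices and edges in Nebula Graph; a small in-memory list of (idiom, pinyin) pairs stands in for the graph below, and all the sample data is illustrative.

```python
# Illustrative sketch: filter Chinese-idiom candidates by the hints that
# Handle exposes -- which characters and which pinyin syllables appear.

def filter_candidates(candidates, present_chars=(), absent_chars=(), present_pinyin=()):
    """Keep idioms consistent with the hints gathered so far.

    candidates:     iterable of (idiom, pinyin_syllables) pairs
    present_chars:  characters known to appear somewhere in the answer
    absent_chars:   characters known not to appear
    present_pinyin: syllables known to appear in the answer's pronunciation
    """
    result = []
    for idiom, pinyin in candidates:
        chars = set(idiom)
        if not set(present_chars) <= chars:
            continue  # a required character is missing
        if chars & set(absent_chars):
            continue  # contains a character ruled out by a grey hint
        if not set(present_pinyin) <= set(pinyin):
            continue  # a required syllable is missing
        result.append(idiom)
    return result


# Toy candidate set (idiom, syllables) -- a stand-in for the graph query result.
idioms = [
    ("一心一意", ["yi", "xin", "yi", "yi"]),
    ("三心二意", ["san", "xin", "er", "yi"]),
    ("四面八方", ["si", "mian", "ba", "fang"]),
]

# Hints so far: "心" is in the answer, "八" is not, and a "yi" syllable appears.
print(filter_candidates(idioms, present_chars="心", absent_chars="八", present_pinyin=["yi"]))
```

In the real knowledge graph the same narrowing would be expressed as a graph query over the character and pronunciation edges rather than set operations over a list.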
A
We can finally leverage this graph. I'm finding a balance point here: I didn't make it fully automated, which would ruin the fun of enjoying the game; instead, you are leveraging this knowledge graph as an addition, an extension of your brain. So the bonus output of this graph is that you can play the Chinese Wordle challenge every day, and now the process becomes making the matching —
A
— queries as, you know, an nGQL exercise while you are playing the game, which I consider reasonable, though maybe it's too nerdy for most folks. But you know we are in the community of Nebula Graph, so I think it's kind of worth doing. So the outcome is: leveraging the multiple-MATCH support in version 3, I can translate all the hints.
A
So this is the topic of my part today, and I think we can come back to the sync discussion part. Finally, thank you. — Thank you for the demo, that's cool! — Thank you so much. So maybe I can go through your questions; I think you can share your screen if you like.
B
Yeah, sure. So basically, I wanted to get the essence of Nebula Graph as such — like, for example, how much data, who are all the big companies using it, and what was the maximum number of nodes and edges that got ingested into Nebula? Can you give us some statistics?
A
Oh sure. Actually, we can check on the users: there are actually a bunch of teams using Nebula Graph that involve a huge volume of data.
A
When we are considering data volume, I can see that Meituan — a Chinese company that is like a Chinese version of Yelp, but much larger, because they are making a super app: in this app you can order food like Uber Eats, you can have ratings of restaurants or museums like Yelp, and, you know, you can even...
A
You can also, as I recall, book a taxi from this app. So they have a bunch of users, and they shared some articles on their own blog — this is the translated version. And, for example, Tencent WeChat — this comes from the same company, and WeChat is the Chinese version of WhatsApp — they are leveraging Nebula Graph for their social media risk control and the connections between the users.
A
It's also a very large data volume. As I recall there's also Kuaishou — in English it's named Kwai; it's something like TikTok, but smaller than TikTok. They are also using it, and if I recall from their sharing, the increment then even —
B
— done, let's say, for example, billions? I saw some references to Alibaba.
A
I may need to double check, but it's supported, and I don't recall which user is handling the largest volume. But as I recall, Kuaishou's daily incremental data volume is at the hundreds-of-millions level, so their data in total, as I recall, should also reach the billions level. — I see.
B
Some of the things that you support: there is the Importer, which is CSV-based.
B
And the other is the Spark connector. Which one are these people using — is it the Importer, or some other thing? Yeah, I talked to him quite a bit, actually, over —
B
— GitHub, so he helped me in a lot of situations.
A
— the Nebula landscape. But I can first answer your question on who is using what: as I recall, WeChat is using Exchange. Exchange is a Spark application.
A
It's a client that actually consumes data from other sources and writes the data to Nebula Graph. I'm not sure if you already have some context on this tooling, but as I recall they have — oh yeah — billion-level data, and they want to import billion-level data at T+1, every day, so they are using Exchange for that.
A
So this is decoupling, offloading the data work: when you are inserting data into Nebula, the storaged will sort your data, and that consumes the capacity of the Nebula Graph cluster. But with Nebula Exchange, if you select to output the data as SST files and then ingest the SST files directly into the cluster, you can offload this sorting, this computation phase. So that's —
B
You are saying, instead of the Nebula Importer or Spark streaming, you're suggesting to use Nebula Exchange — is that what I'm understanding?
A
Yeah, that's for the extremely large data volume case, but —
A
Oh, in that case — you may have a couple hundred billions of data items; in that case the Importer may not be your best choice. Actually, for other users, not all of them are directly using our tools: some of them are leveraging, like, the Spark client, the Java client, or the Go client in their applications, and some of them are leveraging streaming infra like Kafka or Pulsar.
A
They are doing them together: they are connecting things through the streaming infra, and with Exchange it can be connected to those Kafka sources offline, and that's one solution. Also, the problem with the Importer is that you can only leverage the client resources of one server, right? You're only running this binary from one server, so in case that is your bottleneck —
A
— and we have multiple servers on the server side, but on the client side, if your data volume is so large that just one server as the client is not sufficient for you, then we encourage you to use Exchange. Exchange can leverage the client resources of more than one server, even if your data source is CSV.
B
Options — yeah, sorry. So I looked at Exchange also, but the thing is, there isn't much documentation on how to use it; maybe the documentation needs some improvement.
B
So I don't know how to use it. I would rather use something like — for example, when I used Dgraph, they have something called the bulk loader.
B
What it does is take the data and write it to SST files, and then you copy these SST files over to the serving cluster and just start the cluster, and it all works fine. So I'm sure there are some similarities in the technologies here between Dgraph and Nebula. The thing is, the Exchange feature is not fully documented, or needs some more help on how to use it.
A
Yeah, yeah. Actually, I'm not sure if you checked all the import options — there is...
B
And also one more thing: this is 3.0.1 — I'm not sure if Kubernetes installation is supported. Actually, I have not seen any Helm chart files or anything like that.
A
I'm sorry for this. Before I give you some more information on the Kubernetes part, for the Exchange I want to quickly give you an introduction — and for sure we will improve the documentation; I'm sorry that it confused you. In one word, very quickly: Exchange is logically just like the Importer.
A
So you describe how the data is mapped in one configuration file, and one blocker that may trip up fresh users is that you have to use Spark. After you have Spark, you can use spark-submit to directly call this JAR — which is the Java binary package — you can call it directly, and your configuration is explicitly specified here. Okay, yeah, and the logic is equivalent to what's in the Importer.
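As a hedged sketch of the invocation described above — spark-submit calling the Exchange JAR with a mapping config file — the command can be assembled like this. The entry class name and the file paths are assumptions for illustration; check the Nebula Exchange documentation for the exact values for your version.

```python
# Illustrative sketch: build (but don't run) the spark-submit command that
# launches a Nebula Exchange job with a mapping configuration file.
import shlex

def exchange_command(jar_path, config_path, master="local[*]"):
    """Return the spark-submit argument list for a Nebula Exchange run."""
    return [
        "spark-submit",
        "--master", master,
        "--class", "com.vesoft.nebula.exchange.Exchange",  # assumed entry class
        jar_path,
        "-c", config_path,  # the file describing how source data maps to the graph
    ]

cmd = exchange_command("nebula-exchange.jar", "wechat_import.conf")
print(shlex.join(cmd))
```

The command list could be handed to `subprocess.run` on a machine where Spark is installed; the hypothetical `wechat_import.conf` plays the same role as the Importer's YAML config.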
A
Actually, you just need a Spark infra. If you want to try it on Kubernetes, I actually have a blog post around it — there are some small blockers — but it's not translated into English; I will do that in the upcoming weeks for you. And that's all about the Exchange part. Another thing is Kubernetes, yeah.
A
I just checked offline with the guy mainly working on our Operator, and he's saying that they are testing this PR to support 3.0.1 in our feature branch actively this week, and hopefully it will be merged by the end of this month.
B
So if you want me to try it out early, before it is released, I am happy to test out 3.0.1 on my end during setup. So please send me an early build, so I can use it. And one more thing you said, regarding Exchange and the Spark cluster: if I have a Spark cluster, why would I use Exchange? Couldn't I rather use streaming?
A
Yeah, and from the documentation you can see there are like ten different sources already supported by Exchange out of the box — like, yeah, Neo4j, etc.
B
I see. So you said you have some documentation on this one — please send it to me; probably I can use Google Translate to translate it to English and try.
B
— to try to use it, some of these things, yeah. Basically, we wanted to try this out, because I tried Dgraph heavily — do you know Dgraph?
A
Sure, I know them — they are an awesome project, and they are ingeniously leveraging the GraphQL DSL. So yeah, it's a great project, and I know some of our users migrated from Dgraph.
B
Yeah, yeah, I see. So I want to try out Nebula before selecting one or the other. So that's —
B
— why I wanted to try out Nebula as well before deciding on it. That's number one. Now, you talked about the Studio: if you have a cluster, can you bring up your Studio?
A
Yep. Actually, apart from my own Studio — I'm not sure if you know it, but there is a playground out there.
A
An online demo, here, and then you can play with it without any authentication or credentials required. — Yeah, this I already did; I have it.
B
I want to run some graph algorithms, if I can — that's what I'm more interested in. This I already have, and it's up and running, so that's all cool. I wanted to run some graph algorithms with your Nebula Algorithm or whatever.
A
Actually, Nebula Graph for now didn't implement many algorithms inside the cluster itself; instead we are leveraging Spark. So from Studio it's not possible to trigger the corresponding algorithms for now.
A
Actually, one thing I want to correct from earlier: I mentioned this article on my blog, and the example I was making there is not Exchange, but the algorithm — Nebula Algorithm. I'm not sure if you already know Nebula Algorithm.
A
So this is the article that I want to translate to English, but you can check this one, yeah. For now we don't provide a way to call the algorithms from Studio, and I'm not sure about the future either. But you know we have an enterprise version, and in the enterprise version there is a tool called Nebula Explorer; in the future we will implement this algorithm invocation in the GUI of Nebula Explorer. But from Studio, I'm not sure for now. Okay.
B
That's fair enough. So I wanted to ask about these algorithms and the way they are implemented: you said they run on a Spark cluster. When you say that, are you saying that for all these algorithms you extract, fetch the data from the graph database and then run the algorithms on the dataset in Spark — is that what you do?
A
Exactly. Actually, Nebula Algorithm is nothing but a Spark-based application that can consume data either from Nebula Graph directly or even from other sources like CSV files. It just loads the needed graph in memory, makes the computation and the iterations, and then you can output the result either to another CSV file or write it back to the Nebula Graph cluster. — I see, so that means you're confirming...
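The pattern described here — load an edge list into memory, iterate a graph algorithm, emit a result to write back — can be sketched with a minimal in-memory PageRank. This is a toy illustration of the workflow, not Nebula Algorithm itself, which runs the same idea distributed on Spark.

```python
# Toy sketch of the algorithm workflow: load edges, iterate, output ranks.

def pagerank(edges, damping=0.85, iterations=20):
    """edges: list of (src, dst) pairs; returns {vertex: rank}."""
    nodes = {n for e in edges for n in e}
    out_links = {n: [] for n in nodes}
    for src, dst in edges:
        out_links[src].append(dst)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for src in nodes:
            # A dangling vertex spreads its rank evenly over all vertices.
            targets = out_links[src] or list(nodes)
            share = damping * rank[src] / len(targets)
            for dst in targets:
                new_rank[dst] += share
        rank = new_rank
    return rank

# Tiny example graph; in the real pipeline this would be scanned from storaged.
ranks = pagerank([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")])
print(max(ranks, key=ranks.get))  # the vertex receiving the most links
```

The write-back step in the real pipeline would insert these rank values as vertex properties, or dump them to CSV, exactly as described above.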
A
Yes, actually. Nebula Graph was designed from day one to handle, you know, huge data volumes, so doing the graph algorithms inside the graph DB is actually possible and doable. But we didn't implement it, because the data volume that we want to handle is so large that we don't consider that the best way. Maybe in the future, for a small set of graph algorithms —
A
— but it's not in our current roadmap. Yeah, I see.
B
Sure, okay, thanks. And one more thing: whether it is Nebula Exchange or Nebula Importer — let's focus on the Importer, because currently I'm using that — you said it has to be only one instance of Nebula Importer running. Why? Why is that? Why did you say that? — No, I —
A
I just said that if your job is to insert data that's so huge, it's better to leverage Exchange, so it can do the parallel work from a cluster rather than one server. But I'm not saying that you have to use only one instance of the Importer at a time — you can trigger different binaries of the Importer in parallel, yeah. — I see, I see, yeah. I see.
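A minimal sketch of the point just made — several Importer processes can run side by side, one per mapping config. The binary name and config file names are hypothetical; this only builds the commands rather than executing anything.

```python
# Illustrative sketch: one Nebula Importer invocation per mapping config,
# intended to be launched concurrently (e.g. via subprocess.Popen).

def importer_commands(config_files, binary="nebula-importer"):
    """Return one importer command per config, to be run in parallel."""
    return [[binary, "--config", cfg] for cfg in config_files]

# Hypothetical configs, one per vertex/edge type being loaded.
configs = ["player.yaml", "team.yaml", "follow_edges.yaml"]
for cmd in importer_commands(configs):
    print(" ".join(cmd))
```

Whether launching several processes actually helps depends on where the bottleneck is — as noted above, a single Importer instance already runs with internal concurrency, so extra instances mainly pay off when one client machine's resources are exhausted.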
B
So how does it work if, let's say, for example, I have an Azure file system where my CSV data is stored, and I spin off a bunch of importers, each one importing one vertex type — each vertex type's CSV file is, let's say, 10 GB — and let's say I have ten vertex types and I spin off ten Nebula Importers: can the cluster take that load?
A
You have to tune it yourself. But actually, if your data is placed on one server and just belongs to different schemas, you can still leverage only one binary, and it can handle the concurrency for you. Actually, the difference from Exchange, as I mentioned, is that if one server's computation resources are not enough, you have to use Exchange to get more. But even if they belong to different vertex or edge types —
A
— you can still use only one instance, if it's not the bottleneck. — I see, yeah.
B
I see, okay, yeah. Let me try out the Exchange part and see how far I go, and if I have any issues I will probably reach out to you, yeah, on the forums. — Yes, anytime.
A
Okay, thank you so much. — You're welcome. Thank you, bye-bye. — Okay, so we finished our very first open floor sync discussion, and I will call this the end. See you in two weeks, bye!