From YouTube: NebulaGraph Data Import Options for Beginners
Description
In this video, you will gain a deeper understanding of the various data import options of NebulaGraph. In addition, the speaker, Wey, will walk through how to import data in detail so that you can choose the data import option that suits you.
If you have any questions about this, please leave a comment below.
For more information, you can visit our official website: https://nebula-graph.io/
A: We are very happy to share with you some experience with NebulaGraph. Today we want to share the data import options of NebulaGraph, so let's have Wey introduce the different data import methods and the tools provided by NebulaGraph. Wey, can you start now?
B: So, first of all, I will introduce all of the tooling that the NebulaGraph community has provided for data import. There are actually three major ones. The first one is called Nebula Importer. It's coded in Go, and it's a headless binary package, a single file that you can run from any Linux server to import data from CSV files into NebulaGraph.
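For reference, Nebula Importer is driven by a single YAML file. The sketch below is illustrative only: the addresses, space name, and file path are placeholders, and the exact field names vary between Importer versions, so check the docs for the version you run.

```yaml
# Hypothetical sketch of a Nebula Importer config (field names vary by version).
client:
  version: v3
  address: "127.0.0.1:9669"      # graphd address (assumed local deployment)
  user: root
  password: nebula
manager:
  spaceName: basketballplayer    # target graph space (example name)
sources:
  - path: ./player.csv           # local CSV file to import
    csv:
      withHeader: false
    tags:
      - name: player
        id:
          type: "STRING"
          index: 0               # column 0 holds the vertex ID
        props:
          - name: name
            type: "STRING"
            index: 1             # column 1 maps to the "name" property
```

You would then invoke the binary with something like `./nebula-importer --config importer.yaml` on the Linux server that holds the CSV files.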
So the next one is Nebula Exchange.
B: Nebula Exchange is a Spark application that enables you to transform different data sources into NebulaGraph, including file-based, database-based, and stream-based sources. The last one is the Nebula Flink Connector. The Flink Connector, literally, is a connector running on Flink, with which you can consume data from a streaming data pipeline and write it to NebulaGraph. So how do we choose among them? Basically, we'll go through the decision tree, starting from the bottom.
B: If you already have a Spark environment, you are free to go with Exchange, and if you only want to leverage a single binary on one Linux server, you can use the Importer. Another difference is that with Exchange, because you are running the import workload on Spark, you can leverage more than one server, but with the Importer you can only run on a single server. Also, all the other file formats that we support can only be handled by Nebula Exchange. On the other side, if you are importing data from systems like Neo4j, MySQL, ClickHouse, etc.,
B: you have to use Exchange, and Exchange is quite a powerful tool for connecting to different kinds of databases or even stream data. Regarding stream data: if you are importing data from a pub/sub system like Kafka or Pulsar, you will use Nebula Exchange, and if you are importing data from Flink, you will use the Flink Connector. So that's basically all the options we can choose from, Lisa.
A: If I understand it correctly, for streaming data, if it comes from Flink, the Flink Connector should be used, and if the data is from Kafka or Pulsar, Nebula Exchange can be used. Is my understanding right?

B: Yes.

A: Okay, got it. So now...
B: You mean the client SDKs for different languages like Python and Java? Actually, the difference is in the nature of those clients. If your application runs on Python, Java, or Go and talks directly to NebulaGraph, you will use the corresponding SDK or client. If your application is running on Spark, then you will use the Spark client, which is called the Spark Connector, and the same thing applies to the Flink case. But if you would like to import batch or streaming data from sources like files, other databases, or even Kafka and Pulsar, in that case you will have to use Exchange; Exchange is an application rather than a library. So, does it make sense to you?

A: Yeah.
B: Good question, Lisa. Yeah, they all run on Spark. So what's different? As I mentioned before, the Spark Connector for Nebula is actually a client, or library, that you use when you want to talk to NebulaGraph from your own Spark-based workload or code, but Exchange itself is a Spark application, so you can call Exchange from the spark-submit shell or from your Spark code. So that's the same kind of difference we talked about in the last question.
B: Yes, and that's actually the part to be demonstrated, so I will dive into Exchange itself. As I listed (and this is not even the full list of what Nebula Exchange supports), besides databases like MySQL and ClickHouse, it also supports different kinds of files as well as streaming data. Under the hood, Exchange is divided into three components: the reader, the processor, and the writer.
B: The reader is literally the part that talks to those data sources, and the processor is the part that does the processing, based on how you want to output or write the data. As this slide shows, when you are talking to a data source that is a database, you are actually relying on the server-based reader; when you are talking to file-based sources, you are actually using the file-based reader underneath; and of course you are using the streaming-based reader
B: when you are talking to Kafka or Pulsar. Inside Exchange, the reader reads from the source and the data is processed with different types of processors. The difference here is that if you choose to output your data into the NebulaGraph cluster directly, you are actually leveraging the server-based writer, which underneath uses the vertex and edge processors.
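To make the reader/processor/writer split concrete, here is a hedged sketch of an Exchange configuration that pairs one MySQL source (a server-based reader) with the server-based writer. The addresses, space, table, and field names are placeholders, and the exact schema differs between Exchange versions, so treat this as a shape, not the definitive file.

```conf
# Illustrative Nebula Exchange config sketch (HOCON-style); check the docs
# for the exact fields of the Exchange version you run.
nebula {
  address {
    graph: ["127.0.0.1:9669"]   # graphd, targeted by the server-based writer
    meta:  ["127.0.0.1:9559"]
  }
  user: root
  pswd: nebula
  space: basketballplayer       # target graph space (example name)
}
tags: [
  {
    name: player
    type {
      source: mysql             # picks the server-based reader
      sink: client              # server-based writer (alternative: sst)
    }
    host: 127.0.0.1
    port: 3306
    database: basketball
    table: player
    fields: [name, age]         # source columns
    nebula.fields: [name, age]  # target tag properties
    vertex: id                  # column used as the vertex ID
  }
]
```

The job itself would then be launched through spark-submit, since Exchange is a Spark application rather than a library.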
B
But
there
is
another
option
which
is
called
the
sst
writer.
That
is,
you
can
optionally
choose
to
make
exchange
as
your
application
to
directly
creating
the
sst
files,
and
you
can
sideload
those
ssd
files
into
another
cluster
directly
without
manipul?
Talking
with
the
graftee,
so
in
that
case
you
can
achieve
a
huge
data,
import
writing
rate,
so
not
everyone
needed,
but
never
provided
by
default.
In
that
case,
you
will
use
the
file-based
writer
which
will
consume
the
sst
processor
underlying.
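For context, once Exchange has generated the SST files, they still have to be ingested by the storage service. In NebulaGraph this is typically done from the console with statements along these lines, where the HDFS URL is a placeholder for wherever the job wrote the files:

```ngql
-- Download the generated SST files from HDFS to each storaged's local
-- data path, then ingest them, bypassing graphd's normal write path.
DOWNLOAD HDFS "hdfs://192.168.0.1:9000/sst";
INGEST;
```

This sideloading step is what makes the SST path so much faster than writing through graphd, at the cost of the extra operational steps.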
So I hope that answers your question, Lisa.

A: Yeah.
A: Yeah, actually I can understand it all now, and it was quite an amazing explanation for me. So today's discussion about data import options is over now. We hope you have learned about the tools and options for data import, and if you have any questions about it, just leave us a message or a comment below the video. See you next time. Bye!