From YouTube: NebulaGraph Deployment Options for Beginners
Description
In this video, Wey, the advocate, introduces the different options to deploy NebulaGraph and guides you in choosing the deployment method that suits you. If you have any questions about this, leave a message below or join our Slack to discuss: https://nebula-graph.io/
B
Sure, Lisa. Okay, we will start with this decision tree, but before that, I will dive into how many options we already have in the Nebula community that you can use to deploy a NebulaGraph cluster.
So all those green circles are actually our options: in Docker, you can use Docker Compose or Docker Swarm; in Kubernetes, you can use Nebula Helm, which is a Helm chart for Nebula, and we also have Nebula Operator, which is a Kubernetes operator to spawn a NebulaGraph cluster for you.
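For example, the Docker Compose path is just a few commands. A minimal sketch, assuming the vesoft-inc/nebula-docker-compose repository and its default compose file:

# Clone the community compose files and start a local cluster
git clone https://github.com/vesoft-inc/nebula-docker-compose.git
cd nebula-docker-compose
docker-compose up -d   # brings up metad, storaged, and graphd containers
docker-compose ps      # verify all services are up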
B
You can build from the source code, of course, and both Debian and RPM binary packages are provided by the Nebula community. There is also a nebula-ansible project to help you use Ansible to spawn Nebula clusters. So how do we choose among so many options?
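As a sketch, the source build and package installs look roughly like this; the version strings and file names are placeholders, so check the actual release assets:

# Build from source (assumes gcc, cmake, and the other build deps are installed)
git clone https://github.com/vesoft-inc/nebula.git
cd nebula && mkdir build && cd build
cmake .. && make -j$(nproc)
sudo make install

# Or install a released binary package
sudo rpm -ivh nebula-graph-<version>.el7.x86_64.rpm        # RPM-based distros
sudo dpkg -i nebula-graph-<version>.ubuntu1804.amd64.deb   # Debian/Ubuntu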
B
So we can refer to this decision tree: are you running containerized or not? Running on Kubernetes or not? Do you want to use the CRD/operator pattern or not? Do you prefer to use a binary package? Is it running on the Intel architecture, or are you running on ARM64 or MIPS machines?
B
If you prefer the Helm chart, you can go for the Helm chart, but it's actually not recommended, because the Helm chart is not good enough for maintaining stateful workloads on Kubernetes. If you would like to run in a container but not on Kubernetes, you can still use Docker, and if you're only running on a single server, you can use Docker Compose. It's the best fit for a single-machine containerized deployment, and it's also the best fit if you want to try it for the first time and you already have a Linux machine. nebula-docker-swarm is a project that lets you run in a containerized way, other than Kubernetes, on multiple machines, but it's actually not recommended; still, we have this option. And if you're running your workload on an ARM-based machine and not in a containerized way, you can build it from the source code.
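If you do go the Helm route despite that caveat, the flow is the usual Helm one. A sketch only; the chart repo URL and chart name here are assumptions, so check them against the nebula-helm docs:

# Add the (assumed) chart repo and install a release
helm repo add nebula-charts https://vesoft-inc.github.io/nebula-charts
helm repo update
helm install my-nebula nebula-charts/nebula
helm status my-nebula   # verify the release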
A
As a beginner, too, I have a question about this: do all of these methods have the same hardware specification requirements when deploying NebulaGraph?
B
Yes, it's a good question, because actually there's not much difference in footprint between the different types of workloads, so basically your hardware requirements relate only to your purposes, your business logic, or your data volumes. But one thing to mention is that, for example, Nebula Operator introduces some control-plane workloads; that's not too much, but it will consume some extra resources. That's one thing.
B
Another thing is that if you are running only for test purposes on a small data volume, you can even bootstrap it on your laptop, with something like four CPU cores, or even vCPUs, and four or eight gigabytes of RAM.
B
But when it comes to a production-level runtime, you'd better use faster storage, like SSD or NVMe SSD, for your volumes. Yeah, okay.
A
Got it. So you mean only SSD is supported?
B
Well, if you only want to test it, to try some basic queries and play with NebulaGraph, you can run it on very weak machines, like your laptop or even very old servers in your lab that only provide HDD disks. You can boot up NebulaGraph and run queries, but please expect some unexpected behaviors. Note that the Nebula community has started some work on making the HDD behavior more stable, but it's never recommended to use HDD. Yeah.
B
One thing to mention, and it's good news: since last month, in our daily builds on master, which is after 2.6.1, we have a nightly build of the ARM-based Linux image. So in fact, now I can run Nebula on my M1 MacBook, in a container, on top of Docker Desktop with the ARM-based Docker image. Yeah.
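On an M1, pulling the ARM build looks something like this; the vesoft images on Docker Hub are real, but treat the tag names as assumptions:

# Pull the ARM64 variants explicitly (tag names are assumptions)
docker pull --platform linux/arm64 vesoft/nebula-graphd:nightly
docker pull --platform linux/arm64 vesoft/nebula-metad:nightly
docker pull --platform linux/arm64 vesoft/nebula-storaged:nightly
# Confirm the architecture of what you pulled
docker image inspect vesoft/nebula-graphd:nightly --format '{{.Architecture}}'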
A
Okay, okay, quite clear to me. And, as I know, many users will want to use NebulaGraph in a production environment, and many of them have asked me about the deployment of the three workloads, graphd, metad, and storaged, in a production environment. So about them, is there anything special to pay attention to?
B
Actually, if you're familiar with the architecture of NebulaGraph, you will already know the answer: graphd is stateless, so you can have any number of them. For storaged, which is the storage backend instance, you will have at least three of them if you deploy in a distributed way, but you can scale out to any number of storaged instances. And for metad, it's recommended to have three of them. Yeah, Lisa. Okay.
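For reference, that topology expressed with Nebula Operator would be a NebulaCluster resource along these lines. This is a sketch only; the apiVersion and field names are assumptions to be checked against the nebula-operator docs, and a real spec also needs images, resources, and storage settings:

# Apply a 2-graphd / 3-metad / 3-storaged cluster (field names are assumptions)
kubectl apply -f - <<EOF
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaCluster
metadata:
  name: nebula
spec:
  graphd:
    replicas: 2     # stateless, scale to taste
  metad:
    replicas: 3     # recommended: exactly three
  storaged:
    replicas: 3     # at least three for a distributed deployment
EOF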
A
How do they differ?

B
A good question, Lisa. I think the answer is: it depends. The general argument between containerized or not, or even Kubernetes or not, applies to Nebula here; I'm not going to talk about all of that, but it generally depends on your infra or ops teams.
B
For example, if your ops team prefers Kubernetes, and all of your observability infra, logging, tracing, and monitoring, is Kubernetes-based, then it's better for you to run on Kubernetes, like using Nebula Operator. But if, for example, the persistent volumes (PVCs) provided in your Kubernetes infra are not fast enough, like you only have slow or HDD-based backend storage, that's a different story.
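A quick way to check what your cluster offers, as a sketch:

# List the storage classes and their provisioners
kubectl get storageclass
# Inspect one in detail (replace <name> with a class from the list)
kubectl describe storageclass <name>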
B
In that case, you will have to run Nebula in a non-Kubernetes way, because then you can leverage your fast disks on your bare-metal operating systems. But it basically depends on your situation, and there is no big difference; they are the very same core binary in both the container images and the binary packages or the source code that you would like to build.
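You can check that for yourself; a sketch comparing versions, where the entrypoint path, the install prefix, and the tag are assumptions (the daemons are gflags-based, so --version should print the build info):

# Version inside the container image
docker run --rm --entrypoint /usr/local/nebula/bin/nebula-graphd \
  vesoft/nebula-graphd:v2.6.1 --version
# Version from a package or source install (default prefix assumed)
/usr/local/nebula/bin/nebula-graphd --version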