Description
Details: refer to the release note: https://docs.nebula-graph.io/3.1.0/20.appendix/release-note/
A: So maybe we can... okay, we will start. May the force be with you. Welcome to another NebulaGraph community meeting, let's get started. So today, as usual, this is our agenda, but one different thing is that our hot topic in this meeting will be an overview of release 3.1.0, which was released a few weeks before, and that's also, you know, our project heartbeat.
A: Okay, we don't have new members in this meeting, so we'll go through it. We will have this meeting bi-weekly, so everyone can bring their topics, their proposals, their stories to the meeting. That way we can have a sync discussion and everyone else will learn from you.
A: And feel free to reach out to us in Slack and in GitHub Discussions. So, we finally released 3.1.0, and you can refer to the release note in the documentation. There are a bunch of improvements and bug fixes, but I will walk you through the main ones, those I consider worthy of being discussed. So, yes, I'm going to walk you through them.
A: The main part of this release is improvements, refactoring and optimization. The first one is: we moved DOWNLOAD and INGEST. These two functions are related to SST files: we can generate SST files with Nebula Exchange. If you know that tool, from Exchange you can generate the underlying files, which can be sideloaded into NebulaGraph. This saves the NebulaGraph cluster from sorting the data at write time. So if you are ingesting, say, billions of vertices in batch every day, you can do it this way.
A
While
this
exchange
and
download
ingest
fashion
to
download
means
you,
you
will
download
the
ssd
files
in
hdfs
from
the
from
the
nablograph
cluster,
and
this
requires
you
have
the
hdfs
client
be
installed
in
the
storage
nodes
so
and
the
storage
nodes
will
fetch
the
files
into
the
the
local
disk,
and
then
you,
you,
give
an
ingest
to
make
this
sst
in
injection,
merging
jobs
being
started,
and
previously
both
of
about
two
jobs
will
be
done
in
your
console
session
in
a
sync
way,
so
you
are
blocked
there
and
if
your
session
is
lost,
there
will
be
interruption
of
this
long
run
execution.
A: Now, in this release, we brought these two actions under the job manager, so you can issue them in an async way. The command itself returns in a short time, and you can then SHOW JOBS to see the progress of a given ingestion job.
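As a sketch of the async flow (the HDFS address and path are placeholders; check the 3.1 docs for the exact statement shapes):

```ngql
# Submit the download as a managed job; the statement returns a job id immediately
SUBMIT JOB DOWNLOAD HDFS "hdfs://192.168.10.100:9000/sst";
# Then submit the ingest job to start the SST injection/merging
SUBMIT JOB INGEST;
# Check the progress of the jobs at any time
SHOW JOBS;
```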
A: The next one is, I think, quite important. A lot of our users have noticed that we temporarily removed the cluster balance function from 3.0, because we found the implementation rough, not perfect, and we didn't want users to trigger a data loss from 3.0. In this release we bring it back, yes.
A: Welcome. Can you hear me? Yes, okay. Could you give us a brief introduction after I finish this part, Peter?
A: Okay, thank you. So, actually, this time it's only me, so welcome to our community meeting. Yeah, but we will have this recorded, archived and uploaded to YouTube; maybe some others will benefit from it.
A: And now I'm giving a brief introduction of those improvements introduced by 3.1.0 that are worth mentioning. So, let's just begin.
A: So there is a flag named enable_experimental_feature. After enabling that, some of the experimental features can be turned on; for now we have two of them. One is TOSS, Transaction On Storage Side. It will sacrifice some write performance to give you more of a transaction guarantee. It is not pure ACID, but from a single-statement perspective the writes will be coordinated from the storage side in some sort of transaction.
A: The other, a very important one, is that you can do BALANCE DATA and BALANCE LEADER with this flag enabled. We temporarily disabled the balance feature from 3.0 because some of the Raft implementation had issues in rare conditions, and we still risked data loss. So we disabled it, and now it's brought back behind this experimental feature flag.
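A minimal sketch, assuming the flag and statement names discussed here (verify against the 3.1 docs before relying on them):

```ngql
# In nebula-graphd.conf: --enable_experimental_feature=true
# With the flag on, the balance statements become available again:
BALANCE LEADER;
BALANCE DATA;
SHOW JOBS;    # balancing runs as a job, so progress shows up here
```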
A: There are some other improvements. On the left side, they are openCypher related. This one is: we can filter based on patterns, so we can write queries like this, where you filter with a certain match pattern. And another one: previously we didn't support referencing the same alias more than once; now this is fixed.
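A sketch of a pattern used as a filter; the tag, edge type and property names here are made-up examples:

```ngql
# Keep only players who follow "Tim Duncan":
# the pattern inside WHERE acts as a predicate
MATCH (v:player)
WHERE (v)-[:follow]->(:player {name: "Tim Duncan"})
RETURN v.player.name;
```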
A: Another thing: previously, in the variable-length hop expression, you had to provide a max hop if you specified the `..` range. Now it's optional, so you don't have to provide the upper bound of the hops.
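For example (the dataset names are illustrative), something along these lines should now parse:

```ngql
# Before 3.1 an upper bound was mandatory, e.g. [:follow*1..3];
# now the bound after `..` can be omitted:
MATCH p = (v:player {name: "Tim Duncan"})-[:follow*1..]->(v2)
RETURN p;
```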
A: And we have a neat command named CLEAR SPACE now, which means you can clean all the data but keep the schema. So this is an example.
A: Here, before we clear the space, you can see we have data, and we actually have an index created as well. After you clear the space, you will see all the data is gone, while all the schema, including the indexes, is preserved, not cleaned up. This is a quite handy feature when you are dealing with a lot of test environments: you don't have to delete and recreate the space.
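The command itself is a one-liner (the space name is a placeholder):

```ngql
# Remove all vertices and edges in the space,
# but keep tags, edge types and indexes
CLEAR SPACE basketballplayer;
```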
A
Okay,
so
there
are
some
optimizations,
so
most
of
them
are
performance,
wise,
like
in
sub
graph
and
find
paths.
A
We
apply
more
operating
rules
to
improve
the
performance,
but
for
details
you
can
refer
to
the
corresponding
pr,
and
there
are
certain
test
cases
in
this
pr
and
we
also
optimize
the
path
of
operator
so
that
some
of
the
in
certain
cases,
some
of
the
redundant
passes,
will
not
be
fetched.
So
it
will
improve
the
performance
and
also.
A
We
optimize
the
get
props
method
from
the
storage
layer,
so
it's
more
optima,
and
so
that's
through
the
goal
and
the
yelp
clause
it
will
pro
avoid
extracting
redundant
properties
in
certain
cases
and
the
the
other
side
is
well.
We
we
have
some
more
progress
on
the
opera,
the
storage
push
down
for
certain
operators
for
get
props.
We
now
push
down
two
more
conditions.
One
is
the
filter.
The
other
is
the
limit
of
the
get
get
props.
So
when
possible,
this
limit
can
be
pushed
down
to
the
story
site.
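A hypothetical query shape that could benefit (the vertex id, edge type and property are made up):

```ngql
# Both the edge filter and the limit can, when possible, be evaluated
# on the storage side, so less data travels to the graph layer
GO FROM "player100" OVER follow
WHERE properties(edge).degree > 90
YIELD dst(edge) AS dst
| LIMIT 10;
```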
A
It's
reduced
to
the
I
o
usability
and
the
other
one
is
aggregation
in
low
cup
is
now
pushed
down
and
there
are
some
other
improvements.
One.
The
first
one
is
actually
fix
certain
issues
when
you
have
multiple
data
paths.
A
This
is
a
database
internally
for
natural
graph,
not
the
one
in
in
operating
system.
So
if
you
configure
multiple
disks
to
storage
d
and
it's
it's
more
awares
with
this
configuration
in
certain
40
cases
and
the
job
manager
was
refactored
have
addressing
some
certain
scenarios.
Issues
and
another
improvement
is
previously,
we
didn't
make
the
query
quite
a
role-based
permission
check
so
now
with
the
authorization
with
more
fine-grained
control
of
this
query.
A
So,
ideally,
you
cannot
kill
other
spaces,
a
user
without
access
to
your
space.
They
cannot
kill
you
previously.
It's
not
prevented
yeah
there
are
a
couple
of
optimizations
are
actually
underlying
configuration
related.
The
first
one
is
we.
We
just
do
this
by
changing
the
default
configuration
value,
which
is
the
auto,
remove
invalid
space,
and
that
says,
if
you
just
dropped
a
graph
space
and
with
this
flag
being
set
as
true.
A
If
you
restart
the
cluster,
the
the
data
will
be
false,
will
be
removed
previously
just
be
marked
as
deleted,
but
it
will
be
only
removed
in
next
compaction,
but
this
flag
will
will
be
more
aggressive,
aggressively
policy
to
do
so.
You
know
a
lot
of
users
concerns
about
the
disk
usage,
so
it
is
pos
and
we
consider
this
to
be
by
default,
enabled
make
more
sense
this.
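In config terms (assuming this is the storaged flag being described), the 3.1 default amounts to:

```
# nebula-storaged.conf: reclaim the data of dropped spaces on restart,
# instead of waiting for the next compaction (3.1 default; previously false)
--auto_remove_invalid_space=true
```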
A: This is a new configuration introduced by one of our contributors. Literally, it means we can now limit the max sessions per IP, per user. So some crazy application or client will not make the whole cluster explode with tons of sessions stacked in metad.
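As a sketch of the graphd setting (the flag name and default value are my reading of the release note; double-check the docs):

```
# nebula-graphd.conf: cap the number of sessions each (client IP, user)
# pair may hold, so a runaway client cannot pile sessions up in metad
--max_sessions_per_ip_per_user=300
```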
A
This
is
another
flag
that
we
actually
bring
the
corresponding
feature
of
configure
configurable
flags
from
the
underlying
rocks
db.
So,
with
this
flag
being
set
true,
the
rough
it
means
rock
speed,
rock
db
will
disable
the
the
page
cache
of
the
operating
system.
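Assuming this refers to the storaged flag of the same spirit, the setting would look like:

```
# nebula-storaged.conf: bypass the OS page cache for RocksDB,
# useful when you want precise control of memory during benchmarks
--disable_page_cache=true
```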
A
So
if
we,
if
we
know
what
we
are
doing,
we
can
leverage
this,
especially
when
we
are
doing
some
tests
related
to
performance.
We
can
have
more
pro
precisely
regarding
the
memory
part
control-
A: And this is another changed default configuration. We introduced KV separation, if I recall correctly, from 2.6, and that's a great choice for performance improvement, especially when our properties are quite large. By default it is set to false.
A
But
when
you
set
to
chew,
we
had
to
set
a
threshold,
saying
that,
after
you
reach
your
properties,
reach
to
to
this
threshold
size
in
bytes,
the
variable
we
will
set
will
be
persistent
separately
instead,
instead
of
connect
together
with
the
key.
As
we
know,
the
underlying
there
are
key
values
for
the
storage
side,
and
now
we
we
change
the
full
value
from
0
to
100
to
potentially
not
confuse
the
user.
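Put together (flag names as I understand them from the storaged config; treat this as a sketch):

```
# nebula-storaged.conf: store large values separately from their keys
--rocksdb_enable_kv_separation=true
# values at or above this size in bytes get separated;
# 3.1 raises the default from 0 to 100 so small values stay inline
--rocksdb_kv_separation_threshold=100
```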
A
So
if
they
set
flag
into
true
but
left
default
to
zero
the
it
it
will,
it
won't
be
I
the
optimal
in
most
of
cases,
because
if
your
variable
is
quite
small,
you
just
save
it
persistent
together
with
the
key
it
will
be
somehow
more
performant
actually
so
give
a
non-zero
value
makes
more
sense
another-
or
this
is
a
small
change
that
previously
we
hard
code,
the
the
max
depths
of
the
expo
expression
depth.
A
So
this
is
the
steps
not
is
not
really
directly
related
to
our
query
like
pipeline
or
waves.
It's
not
that
depth.
It's
like
it's
underlying
depth
of
the
graph
query
when
it's
passed
as
operators.
A
So
this
more
fine
grained
depth,
but
previously
we
just
gave
a
a
value
hard
coded,
but
in
in
rare
cases
some
user
will
reach
out
to
this
threshold
and
it
should
be
configurable,
and
this
is
this
pr
bringing
this
value
to
be
configurable
and
by
default
it's
like
512.
A
I
think
most
of
the
user
should
not
care
about
this
yeah
that's
most
of
the
the
improvements,
so
there
are
a
bunch
of
bug
fixed.
I
don't
think
the
words
to
be
described
in
this
meeting
so
but
be
sure
to
check
out
the
business
note
if
you
are
interested
regarding
the
upgrade.
A: We have a db_upgrader utility. It's actually a binary file, shipped if you installed Nebula from a binary like a Debian or RPM package; it's not included in the container image by default. We only need to run this to help change the underlying data files, because we bring structural changes between major versions. But if you are running 3.0 and want to upgrade to 3.1, all you need is to replace the package, replace the binary files. That's all you need. But in case you are upgrading from previous versions like 2.0 or 2.6, it's different.
A
You
have
to
do
leverage
this
this
utility,
but
be
sure
to
follow
the
procedures
in
documentation
and
yep,
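For reference, an invocation might look roughly like this; the paths, meta address and version value are placeholders, so follow the documented upgrade procedure (and back up your data first) rather than copying this:

```
# Sketch of upgrading 2.x data files for 3.x with db_upgrader
./db_upgrader \
    --src_db_path=/usr/local/nebula/data/storage \
    --dst_db_path=/usr/local/nebula/data/storage_new \
    --upgrade_meta_server=192.168.10.100:9559 \
    --upgrade_version=<target>
```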
A: And that's all from the core database perspective. In the last release cycle we don't have too many updates on the surrounding toolings. One of them to be mentioned is the Kubernetes Operator.
A
Actually
we
support
this
for
for
some
time,
but
but
in
in
the
beginning
of
3.0,
we
didn't
support
it,
and
now
the
kubernetes
operator
supports
report,
oh
already,
like
months
months
before,
and
also
in
our
nebula,
contribute
github
org.
We
have.
We
have
a
new
project
called
graph
ocean,
it's
contributed
by
one
of
our
community
contributors
from
jd,
so
it's
a
java
or
rm.
A
So
if
you're
interested
in
that
be
sure
to
check
the
the
repository,
so
the
only
one
surrounding
tooling
that
open
that
which
is
open
source
that
was
to
be
mentioned
with
some
news-
are
the
studio.
So
there
are
a
bunch
of
new
features
and
and
refactors
on
this
project.
So
now
we,
for
example,
two
of
them
are
major
ones.
A
We
support
multi-task,
sync
import
and
now
you
can
view
the
progress
logs
of
those
tasks
etc,
and
now
we
have
a
gui
based
wizard
based
importer.
We
wrapped
the
nebula
importer
with
the
the
studio
to
make
it
like
a
wizard.
So
you
don't
have
to
compose
your
long
bible
long
yamo
file.
Instead,
you
can
do
it
with
clicks
if
you
prefer,
and
now
it
supports
some
sort
of
templates
so
be
sure
to
check
out
that
feature.
A
So
I
think
that's
that's!
That's
all
of
them
yeah.
So
we
we,
we
finished
finish
that
part.
So
actually
that's
most
of
the
today's
topics
I
would
like
to
share
and
the
tiger
welcome.
If
you
are
a
convertible,
would
you
mind
introduce
yourself
to
us?
Yes,.
B: Let's jump in and see what's going on. Since I'm a Go developer, I started to look into the Go driver. I read the code and submitted a few pull requests.
A: Thank you. Well, it's my pleasure to have you, thank you so much. Okay, so anytime, if anything, you can just ping me in Slack, yeah. And I saw your issues, that you want to do something around the Go client, right?
B: I must read a little bit more of the documentation and use NebulaGraph more to fully understand this solution. In our company we have some use cases for this, and this can be one of the available solutions. So this is why I am so excited to be part of this.
B: I think the most important thing is that there is code that works; that is the most important part. The rest is experience, like how we maintain this, how we evolve documentation, examples, etc. These are tiny bits where perhaps we need a vision from the outside to see, okay.
A: ...our Slack. And our managed service just launched a couple of weeks before and it's in open beta, so if you're interested, just check it out.