From YouTube: Enablement: Global Search - GitLab 14.6 Kickoff
A
So the first thing is that we are really focused on two different objectives. As we look at 14.6 in Global Search, we want to make some incremental improvements to the UX and front end, as well as really focus in on performance improvements, specifically looking at how we can increase the performance and reliability of GitLab.com as a SaaS.
The next thing is something that we saw can be a little bit confusing for customers. Sometimes when you are searching, you get an option to choose that search term in the autocomplete as it's focused in on a group or across all of GitLab, and the same thing happens when you're looking at a project: you can also get an offer to search that term specific to the group. Those contexts are really helpful, but they can be a little bit confusing because the text runs together, so we are looking at breaking it out. The design looks like this, where you can have the group and the list of projects out to the right-hand side, and it would also be reflected in the search bar as well if you're searching in that context already.

So this actually condenses the view a little bit, but it also creates a better separation in the alignment where these groups and projects are listed, so that it doesn't get confusing or blend in with the search term that's chosen. The expectation is that it will encourage more users to actually drill into their scope or group from the initial search, which should lead to a better experience.
B
Thank you, John. One of the performance improvements we are going to do in 14.6 is to move commits to its own Elasticsearch index.
A
Yeah, I want to dig into this just a little bit, because I think it's really interesting: we're changing how we index a type of document, but it's actually going to speed up the performance when we go to recall those documents. Even though it's an index change, we're not really trying to speed up how fast we're indexing; that seems to be okay. It's actually about how you recall that information back from the index, and that's where it gets really interesting.
If you have a really large index that throws everything into it, it has to search across lots and lots more shards. If we break this down into smaller indices with fewer shards, and the scope being searched only needs to query those shards, the difference in performance shows: you're not searching across hundreds of shards, you're searching across tens of shards to build that result set back, and ultimately it makes for smaller shard sizes.
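The hundreds-versus-tens point can be sketched as simple fan-out arithmetic: a query touches every shard of each index it targets, so scoped indices mean fewer per-shard searches to merge. All of the numbers below are made up for illustration:

```python
# Back-of-envelope: query fan-out before and after splitting the index
# by document type. Shard counts are illustrative, not real figures.
monolithic_shards = 120          # one index holding every document type
per_type_shards = {"commits": 10, "issues": 5, "merge_requests": 5}

# A commit search against the monolithic index touches all shards:
fanout_before = monolithic_shards            # 120 shard-level searches
# Against a dedicated commits index, only its own shards:
fanout_after = per_type_shards["commits"]    # 10 shard-level searches

print(f"shard fan-out: {fanout_before} -> {fanout_after}")
```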
B
Yeah, I think it also helps improve the reliability. If you have everything in one index and you run into an issue with that index, basically everything will be affected. Now, by splitting the index by document type, if you have an issue with one index, the other indices can still be functioning.
A
Yeah, that's a good point; I didn't think about that. We also saw this as we did it with issues. Another unintended benefit that we definitely appreciated was that, as we started adding more features and needed to change how something was indexed, requiring us to re-index it, we didn't have to reprocess everything in the index; we just had to reprocess issues, which was extremely small compared to what the code index was. Instead of spending several hours, or maybe even a day, trying to do a re-index, we were able to re-index issues with the new field that we needed in, I think, about 15 minutes. It wasn't one of our intended advantages, but it does support that there are a lot of other advantages that really come from this concept of breaking out the index, and so there's a lot of value that we'll continue to see as we move down the path of breaking out the index into these individual scopes.
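Reprocessing a single document type maps onto Elasticsearch's Reindex API, which copies documents from one index into another that carries the new mapping. A minimal sketch, with hypothetical index names:

```python
# Sketch: rebuilding only the (small) issues index after a mapping
# change, instead of reprocessing the whole cluster. Index names are
# hypothetical, not GitLab's actual naming scheme.
reindex_body = {
    "source": {"index": "gitlab-issues-v1"},
    "dest":   {"index": "gitlab-issues-v2"},  # new mapping with the added field
}

# With the official Python client, roughly:
#   es.reindex(body=reindex_body, wait_for_completion=False)
print(reindex_body["dest"]["index"])
```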
A
I think I want to jump to something, since we're kind of on that track, around a spike that we're taking on too, right?
B
Yes. So this one is to explore other routing and sharding strategies. Right now our sharding is built around projects: basically, all the data is sharded by the project ID. That provides a very fast search experience for project-level search. However, we also provide functionality which allows users to search within a group.
Now we are exploring another strategy: to shard our data by group. We hope this will yield a good result for group-level search.
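In Elasticsearch terms, this kind of strategy is usually expressed with a custom routing value: documents that share a routing value land on the same shard, so a search supplying that value can skip the full fan-out. A minimal sketch of routing by group ID, where the field names are assumptions for illustration:

```python
# Sketch: routing documents by group (namespace) ID instead of project
# ID. Documents with the same routing value end up on the same shard,
# so a group-scoped search can target just that shard instead of
# fanning out to all of them. Field names are hypothetical.
doc = {"title": "Fix login bug", "project_id": 42, "group_id": 7}

index_params  = {"routing": str(doc["group_id"])}   # supplied at write time
search_params = {"routing": "7"}                    # supplied at query time

# With the official Python client, roughly:
#   es.index(index="gitlab-issues", document=doc, routing=str(doc["group_id"]))
#   es.search(index="gitlab-issues", routing="7", query={...})
print(index_params["routing"] == search_params["routing"])  # values must match
```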
A
Yeah, so that goes right along with how we were discussing breaking out that index. This is more about what determines how you keep similar information in the same shard, and there are lots of ways Elasticsearch tries to balance this out. The routing key is how we try to do this; we force the namespace, is that right?
A
Yeah, the namespace as the routing key, yes. It seems logical that a user, on self-managed as well as SaaS, is more times than not going to be trying to search across things in the same group. What we're looking at now is whether there's a better way to speed this up. We don't know; that's why it's a spike. We want to see if there's a better way.
We think there is a better way, and I really look forward to seeing what the results look like for this. Anytime you can improve your document routing strategy, that's a global impact: it has far-reaching effects and advantages across every aspect of what you're using Elasticsearch for, so it's actually really exciting.
A
If
we
can
come
up
with
a
result,
either
way,
but
definitely
exciting,
we
can
come
up
with
a
result
that
that
is
superior
to
what
we're
doing
today
and
moving
into
the
idea
of
what
we
do
for
you
know
in
the
experimenting
round
right
of
how
we're
trying
to
find
better
ways
to
improve.
B
Since we just talked about the spike, and the spike is about exploring other sharding strategies, it also raises a question: how can we determine whether the new sharding strategy will be better than the existing one? We need a tool to do the benchmarking, and we think Rally is the industry-standard tooling for benchmarking Elasticsearch performance, so we will try to build a testing and benchmarking framework with Elasticsearch Rally here at GitLab.
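Rally is driven from the command line, where a benchmark run is called a "race". As a hedged sketch, an invocation against a candidate cluster might be composed like this; the track name and target host are placeholders, not a real GitLab setup:

```python
# Sketch: composing an esrally invocation to benchmark a candidate
# sharding strategy against an existing test cluster. The track name
# and host below are placeholders.
import shlex

cmd = [
    "esrally", "race",
    "--track=my-gitlab-track",          # hypothetical custom workload (track)
    "--target-hosts=localhost:9200",    # candidate cluster under test
    "--pipeline=benchmark-only",        # benchmark a cluster Rally didn't provision
]
print(shlex.join(cmd))
```

Comparing two sharding strategies would then amount to running the same track against each candidate cluster and comparing the reported latencies and throughput.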
A
Yeah, and I think we touched on this and did our first internal demonstration of using Rally in the last milestone, and as we were learning more about the value it could create, we were impressed, I think. Right?
A
Like
I
mean
I
was
impressed
at
least
I
really
saw
it
as
you
know,
a
tool
that
you
know
really
does
simplify
how
we
can
go
about
trying
to
test
comparable
versions
related
to
performance
testing,
and
I
think,
eventually,
we'll
probably
even
get
into
ways
to
improve
the
quality
of
the
results
that
come
back
as
well,
and
you
know
it
seems
like
it
has
enough
pieces
built
in
that
it
saves
us
a
lot
of
time
by
using
it,
rather
than
trying
to
explore
our
own
methods
of
of
creating
those
comparable
testing
scenarios,
so
it
at
least
at
this
point.
We definitely want to keep exploring how we can keep learning down that path and be able to increase the variables that we'll be able to test, in a faster way. So I'm pretty excited about where this could take us over the next few milestones as we really get it into practice and learn faster the things that it can really help us test.
A
And
if
we're
making
all
these
performance
changes
right
like
it,
it's
also
important
that
we're
focused
on
continuing
the
quality
elements
and
how
we're
actually
looking
at
at
providing
a
qa
standard
to
ensure
a
high
level
of
quality,
and
that's
where
the
next.
The
next
couple
issues
come
from.
A
Yeah-
and
I
think
this
one's
actually
pretty
interesting
right
because
we
as
we
did
this,
we
we
kind
of
discovered
that
the
docker
hub
version
of
elasticsearch
does
not
necessarily
always
keep
up
with
the
most
recent
versions,
so
elasticsearch
released
7,
15
1
at
a
time
in
this
video
they
actually
a
few
days
ago.
It
released
7
15
2.,
but
at
least
we
want
to
keep
up
with
the
latest
version
available
through
docker
hub,
which
you
know,
as
of
yesterday
at
least
is
7
14,
2.
and
so.
A
Yes, yeah, and that's something that we hope to include in a future milestone with some updates, but at this point we're still talking about what we want to do as an alternative. So for at least this milestone we'll be focused on upgrading the version to be the latest available from Docker Hub.

That takes us through a lot of the engineering effort that we're doing coming up now in GitLab 14.6. We really look forward to seeing the UX changes that we're looking at as we incrementally evolve and make that experience easier and better to use, and we look forward to giving even more feature updates and details on our engineering efforts as we go into 14.7.