From YouTube: GitLab 14.10 Kickoff - Enablement:Memory
Description
Kickoff for the Memory Group for the GitLab 14.10 release
Planning issue: https://gitlab.com/gitlab-org/memory-team/team-tasks/-/issues/111
Memory Group Past Kickoff Videos: https://youtube.com/playlist?list=PL05JrBw4t0Kq1HDOIfQ8ov6lfyJkWK2Yr
Presentation by: Yannis Roussos, Sr. Product Manager, Memory and Database Groups

As a quick reminder, our main goal in the Memory group is to continuously identify the bottlenecks that affect GitLab's performance and availability, and address them. At the same time, we work towards providing better tooling, metrics, and visibility to development teams, with the goal of allowing them to make better data-driven decisions when developing new features or iterating on existing ones. So let me take you through our top priorities.

Our first top priority is improving the efficiency and maintainability of our application metrics exporters. On one hand, we have various in-app exporters that are part of the core Rails codebase; at the same time, we also have GitLab Exporter, which is an independently deployed collector and server. This approach has two major issues.

We have two separate codebases to maintain and optimize, and, even more importantly, the current exporters face numerous efficiency challenges. As an example, we know that Ruby has known inefficiencies when dealing with data-heavy workloads and cannot efficiently parallelize inherently parallel problems.

So our goal here is to replace our existing exporters with a single new application exporter system that will run outside of the Rails monolith and that will perform efficiently in the face of large data volumes, and I'm talking about tens or hundreds of thousands of samples per scrape. To do so, we have two tasks.

The first one is to move all our exporters outside of the core application processes, that is, run them as independent servers. After we're done with that, we can replace all of them with a new exporter that will be far more efficient.

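To make that first task concrete, here is a minimal sketch of what a metrics server running as an independent process could look like, using the prometheus-client gem and Rack. The metric name, label, and port are illustrative assumptions, not the actual GitLab exporter code:

```ruby
# Minimal sketch of a standalone metrics server running outside the
# Rails monolith. All names, values, and the port are hypothetical.
require 'rack'
require 'prometheus/client'
require 'prometheus/middleware/exporter'

registry = Prometheus::Client.registry

# A hypothetical gauge tracking per-worker resident memory.
rss_gauge = registry.gauge(
  :example_process_rss_bytes,
  docstring: 'Hypothetical resident set size per worker',
  labels: [:worker]
)
rss_gauge.set(512 * 1024**2, labels: { worker: 'puma_0' })

app = Rack::Builder.new do
  # The exporter middleware serves the default registry at GET /metrics.
  use Prometheus::Middleware::Exporter
  run ->(_env) { [404, { 'content-type' => 'text/plain' }, ['Not found']] }
end.to_app

# Rack 2 handler API; newer Rack versions ship this in the rackup gem.
Rack::Handler::WEBrick.run(app, Port: 9000)
```

A Prometheus server could then scrape this process at http://localhost:9000/metrics, entirely independently of the Puma or Sidekiq lifecycle.
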
So, as a quick recap, we have already done so for Sidekiq, and we will have this fully released soon. The last step remaining is to create the new exporter, and we have decided to use Go, because Go offers both improved performance and much better characteristics with respect to concurrency and addressing parallel problems.

Our expectation is that when we are done, we will have a much more efficient and, at the same time, more reliable way of sending application metrics to Prometheus.

Our second priority relates to performance tooling. During the past two months, we have focused on consolidating our profiling tools: we have decided to keep a few really useful ones, and we have moved some of the existing functionality to the performance bar as well. So the last remaining step, which will be done in 14.10, is to rework our documentation related to performance profiling.

Here we will consolidate all the guidelines for performance profiling that are currently spread across various documents, and also create new step-by-step guidelines that help developers address performance issues and do performance profiling, while guiding them through the process: for example, if there is a specific symptom, how to verify it, what's the first tool to start with, how to interpret the results, and so on.

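As one example of the kind of step-by-step guidance meant here, a memory-related symptom is often first verified with the memory_profiler gem. This is a generic illustration, not a quote from the upcoming documentation:

```ruby
require 'memory_profiler'

# Wrap only the suspected code path; the report attributes allocated
# and retained objects to the gems, files, and lines that created them.
report = MemoryProfiler.report do
  10_000.times { 'a' * 64 } # stand-in for the code under investigation
end

# Start by reading "allocated memory by gem/file": large numbers there
# usually point at the hot path worth optimizing first.
report.pretty_print
```
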
Our next priority relates to rubyzip. We have found that Ruby can run into performance issues when directly iterating through zip files with a lot of files inside them. So we are going to explore alternatives to solve a few problems we have found with rubyzip; this should be done by the end of 14.10.

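For context, the expensive pattern is direct entry-by-entry iteration over a large archive with rubyzip; the file name and processing step here are made up for illustration:

```ruby
require 'zip'

# Iterating a zip with tens of thousands of entries: each entry lookup
# touches the central directory and allocates metadata objects, which
# is where the performance issues show up.
Zip::File.open('artifacts.zip') do |archive|
  archive.each do |entry|
    next unless entry.file?
    # get_input_stream.read loads the whole entry into memory at once,
    # compounding the problem for large files.
    data = entry.get_input_stream.read
    # ... process data ...
  end
end
```
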
Our next priority is optimizing workers that consume a lot of memory and can cause out-of-memory conditions.

So, while working on addressing a few memory saturation events in our production environment, we found that we are regularly hitting out-of-memory conditions, with more than a thousand out-of-memory kills observed on Sidekiq containers per day.

While investigating this, we found that there are a few background workers that occasionally consume more than one gigabyte of memory. That's a lot of memory, and some of those, as you can see here, can go even above five gigabytes. So we have identified the ones that are killed most often or cannot complete due to out-of-memory conditions.

Already in 14.9, we addressed issues with a worker related to pipeline artifacts, and our plan in 14.10 is to evaluate the rest of the workers, see what they do, and decide how to address it, either on the Memory group side or on the side of the feature groups that own those workers.

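A common mitigation for this class of problem, shown here with purely hypothetical names rather than a specific GitLab worker, is to stream records in bounded batches instead of loading them all at once:

```ruby
require 'sidekiq'

# Hypothetical worker; Artifact and process_artifact stand in for the
# real models and logic owned by the feature groups.
class ArtifactsCleanupWorker
  include Sidekiq::Worker

  def perform(project_id)
    # Anti-pattern: `.to_a` (or an implicit `each` over the relation)
    # loads every record into memory at once and can push RSS past the
    # container limit.
    #
    # Safer: find_each streams records in batches of 1,000, so peak
    # memory stays roughly constant regardless of the result set size.
    Artifact.where(project_id: project_id).find_each(batch_size: 1_000) do |artifact|
      process_artifact(artifact)
    end
  end
end
```
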
Finally, our last but not least priority is updating the Ruby version that we use in GitLab to 3.0. This is a very large project: we have already worked on it in the past, and we had to pause it for a while to work on other top priorities, but we want to get back to it. This is important because a year from now, in March 2023, Ruby 2.7, the Ruby version we currently use in GitLab, reaches end of life. So we want to move our whole codebase to Ruby 3 as quickly as possible and unblock any issues we may have with libraries and so on.

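One of the best-known breaking changes on that path, and a frequent source of library incompatibilities, is Ruby 3.0's separation of positional and keyword arguments; a minimal illustration:

```ruby
def log(message, level: :info)
  puts "[#{level}] #{message}"
end

opts = { level: :warn }

log('disk almost full', **opts) # works on both Ruby 2.7 and 3.0

# On Ruby 2.7 the call below only prints a deprecation warning, because
# the trailing hash is implicitly converted to keyword arguments.
# On Ruby 3.0 it raises ArgumentError (wrong number of arguments).
# log('disk almost full', opts)
```
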
So that's all for 14.10! Thank you for watching, and talk to you next month.