From YouTube: GitLab 14.9 Kickoff - Enablement:Memory
Description
Kickoff for the Memory Group for the GitLab 14.9 release
Planning issue: https://gitlab.com/gitlab-org/memory-team/team-tasks/-/issues/109
Memory Group Past Kickoff Videos: https://youtube.com/playlist?list=PL05JrBw4t0Kq1HDOIfQ8ov6lfyJkWK2Yr
Presentation by: Yannis Roussos, Sr. Product Manager, Memory and Database Groups
Before I continue with our planning, I would like to provide a very quick background on what the Memory group does. Our main goal is to continuously identify the bottlenecks that affect GitLab's performance and availability and address them, either on our own or in collaboration with the teams that are directly responsible for those areas and have the domain-specific knowledge. We also work towards providing better tooling, metrics, and visibility to development teams, with the goal of allowing them to make better data-driven decisions when developing new features or iterating on existing ones.
So, on to our top priorities for 14.9. First of all is continuing to work on performance-related tooling. As already discussed, we want to enable every GitLab team member to make data-driven decisions and understand performance-related issues; memory consumption or CPU saturation events require sharp tooling and clear, simple-to-understand performance guidelines.
So in 14.9 we will continue working on consolidating our profiling tools. We provide an extended set of tools for performance profiling, with overlapping functionality and sometimes no clear guidance on when and how those tools should be used. We want to condense them into a more manageable toolbelt and provide clear, well-defined guidelines for developers.
In 14.8 we already reviewed a lot of tools: we decided to keep some of them and archived or removed some others, and we want to continue doing so in 14.9 and cover all the tools that we provide inside GitLab. We have also deprecated request profiling, a feature available to GitLab administrators; it will be removed from GitLab in 15.0.
The profiling information provided by this feature is already partially covered by our performance bar. There is one missing part, the memory profiler, which we are also working on moving to the performance bar.
This is a feature that we have enabled starting with 14.8, but at the moment it is only enabled for development environments, because we have some security concerns about profiling production workloads. We will keep an eye on it, review it, and decide whether we can also enable it for production environments, so that it is also available to GitLab instance administrators.
Finally, in 14.9, after we consolidate all our tools, we also want to work on our profiling tools' documentation and guidelines, and provide clear, easy-to-understand guidelines that explain, for example: if you see a given symptom, how to verify it, which tool to use to start working on it, how to interpret the results, how to make a decision, et cetera. Our next top priority is improving the efficiency and maintainability of application metrics exports.
Our target there is to make the whole process more reliable, more stable, fix a few issues and, of course, make it more performant. Our general plan for this initiative is to implement a new, integrated way of exporting metrics that provides a single exporter for the whole system (at the moment we have multiple ones), runs outside of the Rails monolith, and performs efficiently in the face of large data volumes: tens to hundreds of thousands of samples per scrape.
Sidekiq metrics are already exported separately for the background workers, and now we are looking into extracting the metrics server for Puma, our web server, into a separate server process as well, so that we can improve fault tolerance and GitLab availability in general.
So our plan in general is to first separate those processes, so that we can then find ways to integrate them into a single process. This specific endeavor for Puma is in response to incidents we have seen in the past where the in-process metrics server, the one that is responsible for serving the metrics, can lock up the entire process and cause Puma to lock.
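The failure mode described here can be sketched in miniature. The example below is a hypothetical, minimal illustration in plain Ruby, not GitLab's actual exporter: the metrics endpoint gets its own socket and its own thread of execution (in GitLab's plan, a separate process), so a slow or stuck scrape never competes with the web server's request handling.

```ruby
require "socket"

# Hypothetical sketch: a tiny self-contained metrics endpoint that runs
# independently of the main web server. In the plan described above this
# lives in a separate *process*; a thread keeps the example short here.
class TinyMetricsServer
  attr_reader :port

  def initialize
    @server = TCPServer.new("127.0.0.1", 0) # pick a free ephemeral port
    @port = @server.addr[1]
    @counters = Hash.new(0)
    @mutex = Mutex.new
  end

  def increment(name)
    @mutex.synchronize { @counters[name] += 1 }
  end

  # Render counters in the Prometheus text exposition format.
  def render
    @mutex.synchronize { @counters.map { |k, v| "#{k} #{v}" }.join("\n") + "\n" }
  end

  def start
    Thread.new do
      loop do
        client = @server.accept
        # Consume the request line and headers; this sketch ignores them.
        while (line = client.gets) && line != "\r\n"; end
        body = render
        client.write("HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n" \
                     "Content-Length: #{body.bytesize}\r\n\r\n#{body}")
        client.close
      end
    end
  end
end
```

Because the exporter owns its own socket and loop, a scrape that stalls here cannot stall the web server's workers, which is the fault-isolation property the extraction is after.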
So this is what we are going to work on in 14.9. Once we have extracted all those metric servers, the next step, in future versions, will be to replace them with a single exporter system. In a past milestone we already created a prototype in Go which behaves more or less like what we have today, but is 80 times faster while using a similar amount of memory.
Our assumption is that the Linux out-of-memory killer is killing our workers, as the kernel is not able to fulfill a request to allocate more memory. So, during 14.8, we investigated the top offending workers and found a few that can sometimes consume more than one gigabyte of memory.
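A first step in hunting such offenders is simply measuring how much memory a job retains. The helper below is a hypothetical illustration using Ruby's stdlib `objspace`, not GitLab's actual tooling (which looks at process RSS, the number the OOM killer reacts to, rather than only the Ruby heap):

```ruby
require "objspace"

# Hypothetical helper: run a block of work and report roughly how many
# bytes of Ruby heap it retained afterwards. Real RSS growth is larger,
# but this is a cheap first signal when triaging suspect workers.
def measure_retained_bytes
  GC.start # settle the heap before measuring
  before = ObjectSpace.memsize_of_all
  result = yield
  GC.start # collect anything the block no longer references
  [result, ObjectSpace.memsize_of_all - before]
end
```

Wrapping a suspect worker's `perform` in a helper like this quickly shows whether it holds, say, an entire report in memory at once instead of streaming it.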
So our plan is to work on optimizing several of those top offender workers. We will start with the coverage report worker in 14.9 and work through all of them. Our goal is to reduce their memory consumption, and hopefully we will reduce the number of workers that are killed due to out-of-memory events.
Finally, our last top priority is the composable codebase. This is what we call the project of splitting the GitLab application into functional, application-level parts, for example using Rails engines or something like that, so that we can ensure that only the needed code is loaded at any given time. This is very important.
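The "only load what you need" idea can be illustrated with plain Ruby's `autoload`, which defers reading a file until its constant is first referenced. This is only a toy analogy for the much larger Rails-engines split described above, and all names in it are made up:

```ruby
require "tmpdir"

# Toy illustration of deferred code loading (not GitLab's actual plan):
# register a constant whose source file is only read on first reference.
dir = Dir.mktmpdir
File.write(File.join(dir, "heavy_feature.rb"), <<~RUBY)
  module HeavyFeature
    NAME = "heavy feature"
  end
RUBY

# The file is NOT loaded here; Ruby just remembers where to find it.
autoload :HeavyFeature, File.join(dir, "heavy_feature.rb")
```

`Object.autoload?(:HeavyFeature)` returns the registered path while loading is still pending, and `nil` once the first reference to `HeavyFeature` has pulled the file in, so you can observe that the code stayed unloaded until it was actually needed.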
This is a very big project, so we are still evaluating it, but if we were to do it, it would result in lower memory usage (even up to 30 percent), shorter application boot-up times (even up to 20 seconds faster), improved responsiveness of background workers due to much shorter garbage collection cycles, and much more.