From YouTube: GitLab 13.1 Kickoff - Enablement:Memory
Description
Learn more about what the Memory group is working on for GitLab 13.1.
The first three, as you can see, are about improving the performance of certain endpoints, certain backend endpoints of the GitLab application. The first two are related to the file blame work: the blame API and the corresponding controller. Both of these perform quite slowly on larger projects. These can actually time out before returning and present some problems, and so we're looking to go ahead and make some significant improvements to these in this particular release.
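For illustration, a minimal sketch of the kind of request in question. It uses the standard repository files blame endpoint with a hypothetical instance URL, project ID, file path, and token; on large files in large projects this is the call that can run slowly or time out.

```python
import requests

# Hypothetical values: replace with a real instance URL, project ID,
# URL-encoded file path, and personal access token.
GITLAB_URL = "https://gitlab.example.com"
PROJECT_ID = 42
FILE_PATH = "app%2Fmodels%2Fuser.rb"

# Repository files blame endpoint: returns blame ranges, each with the
# commit that introduced the range and the affected lines.
resp = requests.get(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/repository/files/{FILE_PATH}/blame",
    params={"ref": "master"},
    headers={"PRIVATE-TOKEN": "<your-token>"},
    timeout=60,
)
resp.raise_for_status()
for blame_range in resp.json():
    print(blame_range["commit"]["id"][:8], len(blame_range["lines"]), "lines")
```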
Next up, we are looking to improve our understanding of how our self-managed customers' instances are performing. We can get a lot of information from GitLab.com to understand how it's performing, what the challenges are, and where we can optimize, but we have very little to no operational insight into how our self-managed instances are performing. In many cases that matters when we're identifying some of these problematic endpoints and trying to understand the opportunity to improve GitLab performance as a whole.
We only have good data coming from GitLab.com today, and so we want to try and go ahead and get select metrics from our self-managed customers where we're able to. Of course, you can turn off this instrumentation on GitLab self-managed, so if that raises any concerns, you can turn it off.
So you can understand, for example, memory consumption and things like the topology. You can see some of the areas that we're looking to go ahead and be able to pull back. The first thing we want to try and do here is explore adding Prometheus as a data source for the usage ping. Prometheus is how we already instrument many of the operational metrics as part of GitLab, and so when you look at a GitLab dashboard, a GitLab.com dashboard, that's all coming from Prometheus.
I imagine many of our self-managed administrators today are utilizing Prometheus to, again, understand how their instances are performing. So this content, in many cases, is already being generated by the application; it's just being generated into Prometheus or their monitoring tool. If we can tap into that, again in a very controlled way, and pull that data into the usage ping, we can answer questions like: how much memory is it consuming? What nodes are out there? How many are there, and how are they configured?
Does it map to one of our reference architectures? This is all very important information for us that'll help us build a better product in the future, and so Prometheus, again, as a source that already has all this data, is a great place for us to look at tapping into and leveraging all the great instrumentation that's already been done, rather than trying to instrument it again in our Ruby code.
So this is the first step: to try and pull that information in and provide more visibility into the current blind spots that we have on self-managed. This is a pretty key one for us this release.
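As a rough illustration of the idea, here is a minimal sketch of pulling a couple of operational metrics from a local Prometheus into a usage-ping-style payload. The /api/v1/query endpoint is Prometheus's standard HTTP API; the Prometheus URL, the PromQL expressions, and the payload keys are hypothetical, not the actual usage ping implementation.

```python
import requests

# Hypothetical: a Prometheus server bundled with the instance.
PROMETHEUS_URL = "http://localhost:9090"

def instant_query(promql):
    """Run an instant PromQL query via Prometheus's standard HTTP API."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": promql},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"]

# Illustrative queries only: count scraped instances and sum resident
# memory across processes, then attach them to a usage-ping-style payload.
usage_ping_extra = {
    "node_count": len(instant_query("count by (instance) (up)")),
    "total_rss_bytes": sum(
        float(sample["value"][1])
        for sample in instant_query("process_resident_memory_bytes")
    ),
}
print(usage_ping_extra)
```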
We're also looking to provide better instrumentation of what we call our North Star metrics, or stage monthly active user (SMAU) metrics.
These are actions that our users perform, and they're largely based off the usage ping today. What this does is it tells us how many times a user is using a certain feature: how many pipelines were created, how many times did you open the MR page, things like that. Again, this isn't per user, but it can give us unique active users. What we'd like to do is a better job of reporting the time it takes to complete some of these key actions.
And so, as our teams are looking at prioritizing their backlogs, we can have a better understanding of the overall performance of some of these jobs to be done, rather than just individual components like a controller or a certain page's load speed, which can give you only a partial picture of the overall performance. So we want an overall idea of the time it takes to complete a job in GitLab, particularly these very important jobs, like the SMAU actions we've called out.
We can hopefully arm our teams with more knowledge of the time these actions take and have a better prioritization of the overall performance work, rather than sometimes having to piece together these pictures of performance, which can make it harder to build a case for getting some of this prioritized. So this is focused on hopefully solving that visibility gap for our broader product and engineering teams, solving the long tail of prioritization of performance problems, and making sure we are incorporating them into our prioritization at the appropriate place.
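As a sketch of what "reporting the time it takes to complete a key action" could look like, here is a minimal example using a Prometheus histogram. The metric name, label, and wrapped function are hypothetical; this is the general shape of such instrumentation, not GitLab's actual implementation.

```python
import time
from prometheus_client import Histogram

# Hypothetical metric: wall-clock time to complete a key user action,
# labeled by action name (e.g. "create_pipeline", "view_mr_page").
KEY_ACTION_DURATION = Histogram(
    "key_action_duration_seconds",
    "Time taken to complete a key user action",
    ["action"],
)

def record_action(action_name, func, *args, **kwargs):
    """Run a key action and record how long it took to complete."""
    start = time.monotonic()
    try:
        return func(*args, **kwargs)
    finally:
        KEY_ACTION_DURATION.labels(action=action_name).observe(
            time.monotonic() - start
        )

# Usage sketch: wrap the code path that serves the merge request page.
# result = record_action("view_mr_page", render_merge_request_page, mr_id)
```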
Finally, we have some further memory optimizations that we're working on, namely that we can likely reduce the amount of memory that Sidekiq requires by preloading the application before forking. Right now we fork very, very early on, before we even, I think, start the application, so effectively little to no memory is actually shared.
If we can preload Sidekiq and then fork, we can actually get a lot of the benefits of shared memory, as you can see here in this updated chart, and this will allow us to save about 300 MB per Sidekiq process. On larger instances with more background processing and more Sidekiq jobs, this can be a pretty significant saving in memory consumption, and so this is a great way for us to hopefully improve the efficiency of the application and continue to improve performance as well.
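A minimal sketch of the preload-then-fork idea, using plain os.fork rather than Sidekiq itself: the expensive loading happens once in the parent, and the forked workers share those pages copy-on-write instead of each paying the full cost again. (In CPython, reference counting eventually dirties some shared pages; this only illustrates the mechanism, not the actual Sidekiq change.)

```python
import os
import sys

def preload_application():
    """Stand-in for the expensive part: booting the framework and loading
    job classes, configuration, and other read-mostly data."""
    return ["job_definition"] * 1_000_000  # large, read-only structure

def worker_loop(worker_id, shared_state):
    # Workers only read the preloaded state, so its pages can stay shared
    # with the parent via copy-on-write.
    print(f"worker {worker_id} ready with {len(shared_state)} entries")
    sys.exit(0)

if __name__ == "__main__":
    shared_state = preload_application()  # load once, before forking

    for worker_id in range(4):
        if os.fork() == 0:  # child process
            worker_loop(worker_id, shared_state)

    for _ in range(4):  # parent waits for all workers
        os.wait()
```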
And so those are the main topics for our Memory team in 13.1.
If you have any questions, you can definitely come find us in the issue tracker. You can find our issues under the group::memory label. Please let us know what you think and, of course, open up new issues if you find new performance problems we haven't identified that you'd like us to work on.