From YouTube: GitLab 13.2 Kickoff - Enablement:Memory
Description
13.2 Kickoff for the Memory team
Hi, I'm Josh Lambert, a group product manager here at GitLab, and today I want to talk about what we are working on for our 13.2 release, which will be available on July 22nd, about a month from now. We are still finishing up 13.1, as you can see here. We think most of these will make it, but there's a chance that some will carry over into 13.2 if they don't. In general we've been working on some performance improvements, and here is an example of one that may or may not make it.
Here is an improvement to our file blame API performance. We've already made some improvements to file blame in general, and those have been released, but there are further improvements that we are working on in this particular issue, which may or may not carry over. So some of those things we might continue to work on, but we also have a whole bunch of exciting things for 13.2 as well.
Things like how many requests per second we're processing, and things of that nature. We're trying to continue with this so we understand where we need to optimize: for example, what types of machines you are using, or how important it is to reduce memory consumption versus improving performance. We know all of these things are important, and we're trying to prioritize the impactful things first. Having that telemetry will help us make informed, data-driven decisions to make sure we are delivering the most impact for our users as early as we possibly can.
We certainly want to do more of these things with my team in the future, so this instrumentation will help us understand the performance characteristics of GitLab across the broader set of self-managed installations, so that we can help make GitLab faster for everyone. You can see things like this in the first couple of issues further on down. We are also working on improving the blob controller.
This is, for example, what shows you various types of files, and it also handles things like markdown rendering. It's actually quite important and called pretty frequently. We've been doing some work on this in particular; you can see here some of the flame graphs we've produced as part of that research. We've actually found some challenges, particularly around how often it ends up having to call some XML processing in the course of rendering markdown. We've made some changes here, and they will land in the early part of 13.2.
We are very excited about them. After the changes, you can see here that the rendering time for a markdown file, in this case our test file, went from 38 seconds down to 19 seconds, essentially a 50% improvement in the total time. In particular, the amount of time we spend in markdown rendering has gone down dramatically: as you can see here, instead of eleven point seven seconds, it took under half a second to complete. So just dramatically faster, and we did this largely by caching some of the XML results.
One other area where we're working to improve performance is the live tracing architecture. We've called this by two names, incremental logging and live tracing. It is required if you don't want to use NFS for GitLab: essentially, as the logs come in, we have to upload them in chunks just so that we can show them in the UI. Currently this is not enabled on GitLab.com due to performance problems; it does work, however, just not yet at scale.
So we are particularly excited about this one, and the first step here is to actually re-enable the existing feature; there is a first issue here to do so. That will allow us to take a look at what's happening and try to understand where the performance problems are occurring, so we can then go ahead and fix them.
Another area we're working on is detecting and understanding how often we use cached SQL queries. These can end up being quite expensive, in particular on memory, and you can see the rationale from Kamil here. We'll be working to understand these, do some testing to reduce the amount of them, and understand their impact. This can also help inform our development practices: whether we should be using them, and if so, when.
A number of the other improvements we're working on here are around improving memory sharing. This is for both Action Cable and web, but also for Sidekiq cluster as well. In general, what we're trying to do is more of our loading before we fork. The idea is that, since we can load more of this memory before we fork, we can end up sharing more of it, as opposed to having to load it after forking in individual processes, which causes us to consume more memory.
We are also looking into Ruby 2.7 for a couple of other reasons, but there are some nice improvements to its garbage collection which will allow us to have better memory usage, because it ends up dirtying fewer pages during the process of garbage collection. This will allow us to share more memory across processes, again continuing the theme I mentioned earlier around preloading and managing the forking process.
So we are excited about that, and there are also some other, more minor issues where we're working to continue to improve things. So that's it for our 13.2 release. We have a large number of improvements to make, particularly around helping us understand how GitLab is performing, which will inform future improvements and future prioritization decisions; improving performance in particular areas, like the rendering of markdown; and also reducing memory consumption. So we're really excited about this.